Linux RAID subsystem development
* PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
@ 2024-08-06 14:10 Christian Theune
From: Christian Theune @ 2024-08-06 14:10 UTC (permalink / raw)
  To: linux-raid, dm-devel

[-- Attachment #1: Type: text/plain, Size: 1123 bytes --]

Hi,

we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies data from virtual disk images in 4 MiB (compressed) chunks from Ceph onto a local NVMe-based RAID-6 array that is encrypted using LUKS.

Today I started a larger rsync job from another server (a couple of million files, around 200-300 GiB in total) to migrate data, and we’ve seen the server suddenly lock up twice. Any I/O that touches the mountpoint (/srv/backy) hangs indefinitely. A reset is required to get out of this, as the machine also hangs trying to unmount the affected filesystem. No messages other than the hung-task warnings are shown - I have no indication of hardware faults at the moment.
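When the hang occurs, the stuck tasks can be enumerated straight from /proc, independent of the hung-task detector. A minimal sketch in plain POSIX shell (no extra tooling assumed):

```shell
# List tasks in uninterruptible sleep (state D) - i.e. the tasks the
# hung-task detector will eventually report. Reads /proc directly.
list_d_tasks() {
    for d in /proc/[0-9]*; do
        # /proc/<pid>/stat: the field after the closing paren of comm
        # is the task state; strip through the last ')' to parse safely.
        state=$(sed -e 's/^.*) //' -e 's/ .*//' "$d/stat" 2>/dev/null)
        if [ "$state" = "D" ]; then
            printf '%s\t%s\n' "${d#/proc/}" "$(cat "$d/comm" 2>/dev/null)"
        fi
    done
    return 0
}

list_d_tasks
```

In the wedged state this should list the kcryptd kworkers and anything else touching /srv/backy.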

I’m messaging both dm-devel and linux-raid as I suspect either one, or an interaction between the two, might be the cause.

Kernel:

Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023

See the kernel config attached.

[-- Attachment #2: config.gz --]
[-- Type: application/x-gzip, Size: 59076 bytes --]

[-- Attachment #3: Type: text/plain, Size: 36240 bytes --]



I can’t say whether there’s an earlier version that worked, as this specific configuration is relatively new: we switched to dm-crypt about half a year ago and haven’t seen any issues since. We’ve been using NVMe drives all over the place, and MD RAID for a long time; MD + dm-crypt has also been running for a while without problems. The new element is the overall combination of MD + dm-crypt + NVMe.
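For completeness, this is roughly how I capture the layered stack for the report. The device names (/dev/md0, the LUKS mapping) are assumptions here - substitute the real ones:

```shell
# Capture the NVMe -> md RAID-6 -> dm-crypt -> filesystem stack.
# Helper: print a header, run the command, never abort on failure.
run() { printf '### %s\n' "$*"; "$@" 2>&1 || true; }

capture_stack() {
    run cat /proc/mdstat                      # array state + write-intent bitmap
    run mdadm --detail /dev/md0               # RAID-6 members and layout (assumed name)
    run dmsetup table                         # dm-crypt mapping on top of md
    run lsblk -o NAME,TYPE,SIZE,MOUNTPOINT    # full block-device tree
}

capture_stack
```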

Here’s the output I’m seeing:

[Aug 6 13:17] INFO: task kworker/u64:4:433 blocked for more than 122 seconds.
[  +0.007829]       Not tainted 5.15.138 #1-NixOS
[  +0.005085] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008801] task:kworker/u64:4   state:D stack:    0 pid:  433 ppid:     2 flags:0x00004000
[  +0.000005] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000009] Call Trace:
[  +0.000001]  <TASK>
[  +0.000002]  __schedule+0x373/0x1580
[  +0.000006]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000003]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000005]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000005]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000003]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x2d0/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000003]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000002]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000002]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000003]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000002]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000003]  </TASK>
[  +0.000001] INFO: task kworker/u64:6:435 blocked for more than 122 seconds.
[  +0.007823]       Not tainted 5.15.138 #1-NixOS
[  +0.005092] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008799] task:kworker/u64:6   state:D stack:    0 pid:  435 ppid:     2 flags:0x00004000
[  +0.000002] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000002]  ? sysvec_call_function_single+0xa/0x90
[  +0.000001]  ? asm_sysvec_call_function_single+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000002]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x2d0/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000001]  md_submit_bio+0x6e/0xb0
[  +0.000002]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000000] INFO: task kworker/u64:7:436 blocked for more than 122 seconds.
[  +0.007829]       Not tainted 5.15.138 #1-NixOS
[  +0.005095] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008799] task:kworker/u64:7   state:D stack:    0 pid:  436 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000000]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000002]  ? __blk_queue_split+0x2d0/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000002]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50 
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:11:440 blocked for more than 122 seconds.
[  +0.007923]       Not tainted 5.15.138 #1-NixOS
[  +0.005093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008800] task:kworker/u64:11  state:D stack:    0 pid:  440 ppid:     2 flags:0x00004000
[  +0.000002] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000001]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x2d0/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000001]  md_submit_bio+0x6e/0xb0
[  +0.000002]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000000]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:13:442 blocked for more than 122 seconds.
[  +0.007919]       Not tainted 5.15.138 #1-NixOS
[  +0.005094] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008799] task:kworker/u64:13  state:D stack:    0 pid:  442 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000002] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000001]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000001]  ? finish_wait+0x90/0x90
[  +0.000001]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000003]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000004]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000002]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000001]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000002]  ? set_kthread_struct+0x50/0x50
[  +0.000000]  ret_from_fork+0x22/0x30
[  +0.000003]  </TASK>
[  +0.000004] INFO: task kworker/u64:14:443 blocked for more than 122 seconds.
[  +0.007923]       Not tainted 5.15.138 #1-NixOS
[  +0.005096] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008803] task:kworker/u64:14  state:D stack:    0 pid:  443 ppid:     2 flags:0x00004000
[  +0.000004] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000006]  schedule+0x5b/0xe0
[  +0.000001]  raid5_get_active_stripe+0x50c/0x6c0 [raid456]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000002]  raid5_make_request+0x18c/0xbd0 [raid456]
[  +0.000002]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000002]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000002]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000001]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:15:444 blocked for more than 123 seconds.
[  +0.007922]       Not tainted 5.15.138 #1-NixOS
[  +0.005096] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008802] task:kworker/u64:15  state:D stack:    0 pid:  444 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000002] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000002]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000003]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000003]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000001]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000002]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:17:446 blocked for more than 123 seconds.
[  +0.007922]       Not tainted 5.15.138 #1-NixOS
[  +0.005093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008800] task:kworker/u64:17  state:D stack:    0 pid:  446 ppid:     2 flags:0x00004000
[  +0.000002] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000000]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000001]  ? finish_wait+0x90/0x90
[  +0.000002]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000003]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000002]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000001]  md_submit_bio+0x6e/0xb0
[  +0.000002]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000001]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:18:447 blocked for more than 123 seconds.
[  +0.007919]       Not tainted 5.15.138 #1-NixOS
[  +0.005094] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008803] task:kworker/u64:18  state:D stack:    0 pid:  447 ppid:     2 flags:0x00004000
[  +0.000004] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
[  +0.000042] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000004]  ? dm_submit_bio+0x1e8/0x380 [dm_mod]
[  +0.000004]  schedule+0x5b/0xe0
[  +0.000001]  xlog_state_get_iclog_space+0x10d/0x320 [xfs]
[  +0.000028]  ? wake_up_q+0x90/0x90
[  +0.000002]  xlog_write+0x148/0x6c0 [xfs]
[  +0.000024]  xlog_cil_push_work+0x37f/0x4c0 [xfs]
[  +0.000025]  process_one_work+0x1d6/0x360
[  +0.000001]  worker_thread+0x4d/0x3b0
[  +0.000002]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:20:449 blocked for more than 123 seconds.
[  +0.007921]       Not tainted 5.15.138 #1-NixOS
[  +0.005093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008802] task:kworker/u64:20  state:D stack:    0 pid:  449 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000003]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000004]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000002]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000001]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000003]  </TASK>
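To see what the blocked workers have in common, the dump can be condensed per workqueue; a small sketch over the dmesg text above:

```shell
# Count blocked tasks per workqueue in a hung-task dump read from stdin,
# e.g.:  dmesg | summarize_workqueues
summarize_workqueues() {
    awk '/Workqueue:/ {
        # Print the two fields after "Workqueue:": queue name and work fn.
        for (i = 1; i < NF; i++)
            if ($i == "Workqueue:") { print $(i+1), $(i+2); break }
    }' | sort | uniq -c | sort -rn
}
```

Over the dump above this yields nine blocked kcryptd/253:4 workers against a single xfs-cil/dm-4 worker, which is consistent with suspecting the md/dm-crypt side rather than XFS itself.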

Environment:

CPU:

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 25
model           : 17
model name      : AMD EPYC 9124 16-Core Processor
stepping        : 1
microcode       : 0xa101144
cpu MHz         : 3000.000
cache size      : 1024 KB
physical id     : 0
siblings        : 32
core id         : 0
cpu cores       : 16
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 16
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
bugs            : sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass srso
bogomips        : 6000.04
TLB size        : 3584 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 52 bits physical, 57 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]

Modules:

tls 118784 0 - Live 0xffffffffc16af000
dm_crypt 61440 1 - Live 0xffffffffc169f000
cbc 16384 0 - Live 0xffffffffc1697000
encrypted_keys 24576 1 dm_crypt, Live 0xffffffffc168c000
trusted 40960 2 dm_crypt,encrypted_keys, Live 0xffffffffc167c000
asn1_encoder 16384 1 trusted, Live 0xffffffffc1675000
tee 36864 1 trusted, Live 0xffffffffc1666000
tpm 94208 1 trusted, Live 0xffffffffc1646000
af_packet 57344 10 - Live 0xffffffffc161e000
vxlan 81920 0 - Live 0xffffffffc1585000
ip6_udp_tunnel 16384 1 vxlan, Live 0xffffffffc157a000
udp_tunnel 20480 1 vxlan, Live 0xffffffffc1570000
dummy 16384 0 - Live 0xffffffffc1275000
msr 16384 0 - Live 0xffffffffc1188000
ext4 937984 1 - Live 0xffffffffc1476000
crc16 16384 1 ext4, Live 0xffffffffc0f5b000
mbcache 16384 1 ext4, Live 0xffffffffc0ee6000
jbd2 167936 1 ext4, Live 0xffffffffc1146000
edac_mce_amd 36864 0 - Live 0xffffffffc170a000
edac_core 69632 0 - Live 0xffffffffc1253000
intel_rapl_msr 20480 0 - Live 0xffffffffc12ff000
intel_rapl_common 28672 1 intel_rapl_msr, Live 0xffffffffc108c000
kvm_amd 147456 0 - Live 0xffffffffc11e0000
sd_mod 57344 1 - Live 0xffffffffc11d1000
kvm 1056768 1 kvm_amd, Live 0xffffffffc1373000
irqbypass 16384 1 kvm, Live 0xffffffffc1311000
crc32_pclmul 16384 0 - Live 0xffffffffc0f29000
ghash_clmulni_intel 16384 0 - Live 0xffffffffc0f1f000
ipmi_ssif 45056 0 - Live 0xffffffffc12ea000
aesni_intel 380928 2 - Live 0xffffffffc127e000
libaes 16384 1 aesni_intel, Live 0xffffffffc120c000
crypto_simd 16384 1 aesni_intel, Live 0xffffffffc1369000
cryptd 28672 3 ghash_clmulni_intel,crypto_simd, Live 0xffffffffc12f7000
rapl 16384 0 - Live 0xffffffffc0ebb000
ast 57344 0 - Live 0xffffffffc1064000
drm_vram_helper 24576 1 ast, Live 0xffffffffc0ff5000
drm_ttm_helper 16384 2 ast,drm_vram_helper, Live 0xffffffffc0f40000
ttm 86016 2 drm_vram_helper,drm_ttm_helper, Live 0xffffffffc11bb000
mlx5_ib 397312 0 - Live 0xffffffffc10e4000
drm_kms_helper 311296 4 ast,drm_vram_helper, Live 0xffffffffc1097000
uas 32768 1 - Live 0xffffffffc1040000
usb_storage 81920 1 uas, Live 0xffffffffc1077000
ib_uverbs 172032 1 mlx5_ib, Live 0xffffffffc1009000
agpgart 40960 1 ttm, Live 0xffffffffc0ffe000
fb_sys_fops 16384 1 drm_kms_helper, Live 0xffffffffc0ee1000
syscopyarea 16384 1 drm_kms_helper, Live 0xffffffffc0eda000
sysfillrect 16384 1 drm_kms_helper, Live 0xffffffffc0ecf000
ib_core 421888 2 mlx5_ib,ib_uverbs, Live 0xffffffffc0f7a000
sysimgblt 16384 1 drm_kms_helper, Live 0xffffffffc0eca000
ccp 114688 1 kvm_amd, Live 0xffffffffc0f02000
i2c_piix4 28672 0 - Live 0xffffffffc0f72000
rng_core 16384 2 tpm,ccp, Live 0xffffffffc0f69000
acpi_ipmi 20480 0 - Live 0xffffffffc0f63000
ipmi_si 73728 2 - Live 0xffffffffc0f48000
pinctrl_amd 32768 0 - Live 0xffffffffc0ef9000
i2c_designware_platform 16384 0 - Live 0xffffffffc0ec5000
8250_dw 16384 0 - Live 0xffffffffc0ef0000
i2c_designware_core 36864 1 i2c_designware_platform, Live 0xffffffffc0eaf000
acpi_cpufreq 28672 0 - Live 0xffffffffc0ea3000
xt_limit 16384 2 - Live 0xffffffffc0e9b000
xt_CT 16384 6 - Live 0xffffffffc0e7d000
ip6table_nat 16384 1 - Live 0xffffffffc0e78000
iptable_nat 16384 1 - Live 0xffffffffc0e73000
xt_conntrack 16384 2 - Live 0xffffffffc0e6e000
ip6t_rpfilter 16384 1 - Live 0xffffffffc0e66000
ipt_rpfilter 16384 1 - Live 0xffffffffc0e61000
ip6table_raw 16384 1 - Live 0xffffffffc0e5c000
iptable_raw 16384 1 - Live 0xffffffffc0e54000
xt_pkttype 16384 4 - Live 0xffffffffc0e4c000
xt_LOG 20480 4 - Live 0xffffffffc0e43000
nf_log_syslog 24576 4 - Live 0xffffffffc0e39000
ip6t_REJECT 16384 2 - Live 0xffffffffc0e31000
nf_reject_ipv6 20480 1 ip6t_REJECT, Live 0xffffffffc0e29000
ipt_REJECT 16384 2 - Live 0xffffffffc0e1d000
nf_reject_ipv4 16384 1 ipt_REJECT, Live 0xffffffffc0e18000
tcp_bbr 20480 30 - Live 0xffffffffc0e0f000
xt_tcpudp 20480 13 - Live 0xffffffffc0e06000
sch_fq_codel 20480 69 - Live 0xffffffffc0ddf000
ip6table_filter 16384 1 - Live 0xffffffffc0da7000
ip6_tables 36864 5 ip6table_nat,ip6table_raw,ip6table_filter, Live 0xffffffffc0d57000
iptable_filter 16384 1 - Live 0xffffffffc08df000
nf_nat_ftp 20480 0 - Live 0xffffffffc0e00000
nf_conntrack_ftp 24576 1 nf_nat_ftp, Live 0xffffffffc0df9000
nf_nat 57344 3 ip6table_nat,iptable_nat,nf_nat_ftp, Live 0xffffffffc0dea000
nf_conntrack 167936 5 xt_CT,xt_conntrack,nf_nat_ftp,nf_conntrack_ftp,nf_nat, Live 0xffffffffc0db3000
nf_defrag_ipv6 24576 1 nf_conntrack, Live 0xffffffffc0d50000
nf_defrag_ipv4 16384 1 nf_conntrack, Live 0xffffffffc0dae000
atkbd 40960 0 - Live 0xffffffffc0d9c000
libps2 20480 1 atkbd, Live 0xffffffffc0d96000
serio 28672 1 atkbd, Live 0xffffffffc0d8e000
loop 40960 0 - Live 0xffffffffc0d83000
tun 61440 0 - Live 0xffffffffc0d73000
tap 28672 0 - Live 0xffffffffc0d6b000
macvlan 28672 0 - Live 0xffffffffc0d63000
bridge 303104 0 - Live 0xffffffffc0d05000
stp 16384 1 bridge, Live 0xffffffffc0cfc000
llc 16384 2 bridge,stp, Live 0xffffffffc0cf4000
mq_deadline 32768 1 - Live 0xffffffffc0ce7000
ipmi_watchdog 32768 1 - Live 0xffffffffc0cde000
ipmi_devintf 20480 0 - Live 0xffffffffc0cd8000
ipmi_msghandler 73728 5 ipmi_ssif,acpi_ipmi,ipmi_si,ipmi_watchdog,ipmi_devintf, Live 0xffffffffc0747000
rbd 122880 0 - Live 0xffffffffc0841000
libceph 462848 1 rbd, Live 0xffffffffc086b000
drm 598016 6 ast,drm_vram_helper,drm_ttm_helper,ttm,drm_kms_helper, Live 0xffffffffc0c45000
fuse 155648 1 - Live 0xffffffffc070a000
backlight 24576 2 drm_kms_helper,drm, Live 0xffffffffc0703000
configfs 57344 1 - Live 0xffffffffc06ec000
ip_tables 36864 4 iptable_nat,iptable_raw,iptable_filter, Live 0xffffffffc06e2000
x_tables 53248 18 xt_limit,xt_CT,ip6table_nat,iptable_nat,xt_conntrack,ip6t_rpfilter,ipt_rpfilter,ip6table_raw,iptable_raw,xt_pkttype,xt_LOG,ip6t_REJECT,ipt_REJECT,xt_tcpudp,ip6table_filter,ip6_tables,iptable_filter,ip_tables, Live 0xffffffffc06cd000
autofs4 53248 0 - Live 0xffffffffc057f000
xfs 2052096 4 - Live 0xffffffffc0a4f000
raid456 184320 1 - Live 0xffffffffc062e000
async_raid6_recov 24576 1 raid456, Live 0xffffffffc0627000
async_memcpy 20480 2 raid456,async_raid6_recov, Live 0xffffffffc0621000
async_pq 20480 2 raid456,async_raid6_recov, Live 0xffffffffc0618000
async_xor 20480 3 raid456,async_raid6_recov,async_pq, Live 0xffffffffc0612000
xor 24576 1 async_xor, Live 0xffffffffc0607000
async_tx 20480 5 raid456,async_raid6_recov,async_memcpy,async_pq,async_xor, Live 0xffffffffc0601000
raid6_pq 122880 3 raid456,async_raid6_recov,async_pq, Live 0xffffffffc0594000
libcrc32c 16384 5 nf_nat,nf_conntrack,libceph,xfs,raid456, Live 0xffffffffc057a000
crc32c_generic 16384 0 - Live 0xffffffffc0509000
igb 270336 0 - Live 0xffffffffc05be000
xhci_pci 24576 0 - Live 0xffffffffc0573000
i2c_algo_bit 16384 2 ast,igb, Live 0xffffffffc04fd000
xhci_pci_renesas 20480 1 xhci_pci, Live 0xffffffffc04e4000
crc32c_intel 24576 3 - Live 0xffffffffc0864000
mlx5_core 1400832 1 mlx5_ib, Live 0xffffffffc08f8000
xhci_hcd 315392 1 xhci_pci, Live 0xffffffffc07d2000
ahci 49152 0 - Live 0xffffffffc0524000
libahci 49152 1 ahci, Live 0xffffffffc08eb000
usbcore 339968 4 uas,usb_storage,xhci_pci,xhci_hcd, Live 0xffffffffc077e000
libata 307200 2 ahci,libahci, Live 0xffffffffc0681000
i2c_core 106496 9 ipmi_ssif,ast,drm_kms_helper,i2c_piix4,i2c_designware_platform,i2c_designware_core,drm,igb,i2c_algo_bit, Live 0xffffffffc0666000
nvme 49152 13 - Live 0xffffffffc0566000
mlxfw 36864 1 mlx5_core, Live 0xffffffffc0774000
dca 16384 1 igb, Live 0xffffffffc065d000
pci_hyperv_intf 16384 1 mlx5_core, Live 0xffffffffc04f3000
psample 20480 1 mlx5_core, Live 0xffffffffc05b8000
nvme_core 139264 15 nvme, Live 0xffffffffc0543000
ptp 32768 2 igb,mlx5_core, Live 0xffffffffc049b000
pps_core 24576 1 ptp, Live 0xffffffffc0538000
t10_pi 16384 2 sd_mod,nvme_core, Live 0xffffffffc0517000
usb_common 16384 2 xhci_hcd,usbcore, Live 0xffffffffc050f000
crc_t10dif 20480 1 t10_pi, Live 0xffffffffc051e000
crct10dif_generic 16384 0 - Live 0xffffffffc04f8000
crct10dif_pclmul 16384 1 - Live 0xffffffffc04eb000
crct10dif_common 16384 3 crc_t10dif,crct10dif_generic,crct10dif_pclmul, Live 0xffffffffc0496000
rtc_cmos 28672 1 - Live 0xffffffffc041b000
dm_mod 155648 15 dm_crypt, Live 0xffffffffc04bd000
bfq 90112 0 - Live 0xffffffffc04a6000
mpt3sas 348160 0 - Live 0xffffffffc0440000
raid_class 16384 1 mpt3sas, Live 0xffffffffc0438000
scsi_transport_sas 49152 1 mpt3sas, Live 0xffffffffc0425000
megaraid_sas 180224 0 - Live 0xffffffffc03ee000
scsi_mod 274432 8 sd_mod,uas,usb_storage,libata,mpt3sas,raid_class,scsi_transport_sas,megaraid_sas, Live 0xffffffffc03aa000
scsi_common 16384 5 uas,usb_storage,libata,mpt3sas,scsi_mod, Live 0xffffffffc03a5000

Driver/Hardware Info:

0000-02ff : PCI Bus 0000:00
  0000-001f : dma1
  0020-0021 : pic1
  0040-0043 : timer0
  0050-0053 : timer1
  0060-0060 : keyboard
  0061-0061 : PNP0800:00
  0064-0064 : keyboard
  0070-0071 : rtc0
  0080-008f : dma page reg
  00a0-00a1 : pic2
  00b2-00b2 : APEI ERST
  00c0-00df : dma2
  00f0-00ff : fpu
0300-03af : PCI Bus 0000:00
03b0-03df : PCI Bus 0000:c0
  03c0-03df : vga+
03e0-0cf7 : PCI Bus 0000:00
  03f8-03ff : serial
  040b-040b : pnp 00:04
  04d0-04d1 : pnp 00:04
  04d6-04d6 : pnp 00:04
  0800-089f : pnp 00:04
    0800-0803 : ACPI PM1a_EVT_BLK
    0804-0805 : ACPI PM1a_CNT_BLK
    0808-080b : ACPI PM_TMR
    0820-0827 : ACPI GPE0_BLK
  0900-090f : pnp 00:04
  0910-091f : pnp 00:04
  0a00-0a3f : pnp 00:02
  0a40-0a5f : pnp 00:02
  0a60-0a6f : pnp 00:02
  0a70-0a7f : pnp 00:02
  0a80-0a8f : pnp 00:02
  0b00-0b0f : pnp 00:04
    0b00-0b08 : piix4_smbus
  0b20-0b3f : pnp 00:04
    0b20-0b28 : piix4_smbus
  0c00-0c01 : pnp 00:04
  0c14-0c14 : pnp 00:04
  0c50-0c51 : pnp 00:04
  0c52-0c52 : pnp 00:04
  0c6c-0c6c : pnp 00:04
  0c6f-0c6f : pnp 00:04
  0ca2-0ca2 : IPI0001:00
    0ca2-0ca2 : IPMI Address 1
      0ca2-0ca2 : ipmi_si
  0ca3-0ca3 : IPI0001:00
    0ca3-0ca3 : IPMI Address 2
      0ca3-0ca3 : ipmi_si
  0cd0-0cd1 : pnp 00:04
  0cd2-0cd3 : pnp 00:04
  0cd4-0cd5 : pnp 00:04
  0cd6-0cd7 : pnp 00:04
  0cd8-0cdf : pnp 00:04
0cf8-0cff : PCI conf1
1000-2fff : PCI Bus 0000:00
  1000-1fff : PCI Bus 0000:08
  2000-2fff : PCI Bus 0000:07
3000-3fff : PCI Bus 0000:40
  3000-3fff : PCI Bus 0000:48
4000-5fff : PCI Bus 0000:80
  4000-4fff : PCI Bus 0000:85
  5000-5fff : PCI Bus 0000:84
6000-ffff : PCI Bus 0000:c0
  6000-6fff : PCI Bus 0000:c1
  7000-7fff : PCI Bus 0000:c2
  8000-8fff : PCI Bus 0000:c3
  9000-9fff : PCI Bus 0000:c4
  e000-efff : PCI Bus 0000:c7
    e000-e01f : 0000:c7:00.0
  f000-ffff : PCI Bus 0000:c5
    f000-ffff : PCI Bus 0000:c6
      f000-f07f : 0000:c6:00.0

00000000-00000fff : Reserved
00001000-0008abff : System RAM
0008ac00-0008ffff : RAM buffer
000a0000-000bffff : PCI Bus 0000:c0
000c0000-000dffff : PCI Bus 0000:00
  000c0000-000c7fff : Video ROM
  000cb000-000cbfff : Adapter ROM
000e0000-000fffff : Reserved
  000f0000-000fffff : System ROM
00100000-75b9dfff : System RAM
75b9e000-75b9ffff : RAM buffer
75ba0000-75be3fff : ACPI Non-volatile Storage
75be4000-75c9ffff : System RAM
75ca0000-75ffffff : Reserved
76000000-9df7ffff : System RAM
9df80000-a34e6fff : Reserved
  a249d018-a249d019 : APEI ERST
  a249d01c-a249d021 : APEI ERST
  a249d028-a249d039 : APEI ERST
  a249d040-a249d04c : APEI ERST
  a249d050-a249f04f : APEI ERST
a34e7000-a35d1fff : ACPI Tables
a35d2000-a3a52fff : ACPI Non-volatile Storage
a3a53000-a63fefff : Reserved
a63ff000-a7f54fff : System RAM
a7f55000-a7ffcfff : RAM buffer
a7ffd000-afffffff : Reserved
b0000000-b22fffff : PCI Bus 0000:40
  b0000000-b01fffff : PCI Bus 0000:44
  b0200000-b03fffff : PCI Bus 0000:45
  b0400000-b05fffff : PCI Bus 0000:46
  b0600000-b07fffff : PCI Bus 0000:47
  b0800000-b09fffff : PCI Bus 0000:48
  b2000000-b20fffff : PCI Bus 0000:43
    b2000000-b200ffff : 0000:43:00.0
    b2010000-b2017fff : 0000:43:00.0
      b2010000-b2017fff : nvme
  b2100000-b21fffff : PCI Bus 0000:42
    b2100000-b210ffff : 0000:42:00.0
    b2110000-b2117fff : 0000:42:00.0
      b2110000-b2117fff : nvme
  b2200000-b22fffff : PCI Bus 0000:41
    b2200000-b220ffff : 0000:41:00.0
    b2210000-b2217fff : 0000:41:00.0
      b2210000-b2217fff : nvme
b8180000-b8180fff : Reserved
  b8180000-b81803ff : IOAPIC 4
b9200000-b92fffff : Reserved
b9410500-b94fffff : AMDI0095:00
b9500000-b9500fff : Reserved
  b9500000-b95003ff : IOAPIC 1
ba000000-bddfffff : PCI Bus 0000:c0
  ba000000-ba1fffff : PCI Bus 0000:c1
  ba200000-ba3fffff : PCI Bus 0000:c2
  ba400000-ba5fffff : PCI Bus 0000:c3
  ba600000-ba7fffff : PCI Bus 0000:c4
  bc000000-bd0fffff : PCI Bus 0000:c5
    bc000000-bd0fffff : PCI Bus 0000:c6
      bc000000-bcffffff : 0000:c6:00.0
      bd000000-bd03ffff : 0000:c6:00.0
  bd200000-bd2fffff : PCI Bus 0000:c9
    bd200000-bd2007ff : 0000:c9:00.1
      bd200000-bd2007ff : ahci
    bd201000-bd2017ff : 0000:c9:00.0
      bd201000-bd2017ff : ahci
  bd300000-bd3fffff : PCI Bus 0000:c8
    bd300000-bd3fffff : 0000:c8:00.4
      bd300000-bd3fffff : xhci-hcd
  bd400000-bd4fffff : PCI Bus 0000:c7
    bd400000-bd47ffff : 0000:c7:00.0
      bd400000-bd47ffff : igb
    bd480000-bd483fff : 0000:c7:00.0
      bd480000-bd483fff : igb
e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
  e0000000-efffffff : Reserved
    e0000000-efffffff : pnp 00:00
f0000000-f21fffff : PCI Bus 0000:80
  f0000000-f01fffff : PCI Bus 0000:82
  f0200000-f03fffff : PCI Bus 0000:83
  f0400000-f05fffff : PCI Bus 0000:84
  f0600000-f07fffff : PCI Bus 0000:85
  f2000000-f21fffff : PCI Bus 0000:81
    f2000000-f20fffff : 0000:81:00.1
    f2100000-f21fffff : 0000:81:00.0
f4180000-f4180fff : Reserved
  f4180000-f41803ff : IOAPIC 2
f5180000-f5180fff : Reserved
  f5180000-f51803ff : IOAPIC 3
f6000000-f8cfffff : PCI Bus 0000:00
  f8000000-f82fffff : PCI Bus 0000:0a
    f8000000-f80fffff : 0000:0a:00.5
      f8000000-f80fffff : ccp
    f8100000-f81fffff : 0000:0a:00.4
      f8100000-f81fffff : xhci-hcd
    f8200000-f8201fff : 0000:0a:00.5
      f8200000-f8201fff : ccp
  f8300000-f83fffff : PCI Bus 0000:0b
    f8300000-f83007ff : 0000:0b:00.1
      f8300000-f83007ff : ahci
    f8301000-f83017ff : 0000:0b:00.0
      f8301000-f83017ff : ahci
  f8400000-f84fffff : PCI Bus 0000:09
    f8400000-f840ffff : 0000:09:00.0
    f8410000-f8413fff : 0000:09:00.0
      f8410000-f8413fff : nvme
  f8500000-f85fffff : PCI Bus 0000:08
    f8500000-f850ffff : 0000:08:00.0
    f8510000-f8517fff : 0000:08:00.0
      f8510000-f8517fff : nvme
  f8600000-f86fffff : PCI Bus 0000:07
    f8600000-f860ffff : 0000:07:00.0
    f8610000-f8617fff : 0000:07:00.0
      f8610000-f8617fff : nvme
  f8700000-f87fffff : PCI Bus 0000:06
    f8700000-f870ffff : 0000:06:00.0
    f8710000-f8717fff : 0000:06:00.0
      f8710000-f8717fff : nvme
  f8800000-f88fffff : PCI Bus 0000:05
    f8800000-f880ffff : 0000:05:00.0
    f8810000-f8817fff : 0000:05:00.0
      f8810000-f8817fff : nvme
  f8900000-f89fffff : PCI Bus 0000:04
    f8900000-f890ffff : 0000:04:00.0
    f8910000-f8917fff : 0000:04:00.0
      f8910000-f8917fff : nvme
  f8a00000-f8afffff : PCI Bus 0000:03
    f8a00000-f8a0ffff : 0000:03:00.0
    f8a10000-f8a17fff : 0000:03:00.0
      f8a10000-f8a17fff : nvme
  f8b00000-f8bfffff : PCI Bus 0000:02
    f8b00000-f8b0ffff : 0000:02:00.0
    f8b10000-f8b17fff : 0000:02:00.0
      f8b10000-f8b17fff : nvme
  f8c00000-f8cfffff : PCI Bus 0000:01
    f8c00000-f8c0ffff : 0000:01:00.0
    f8c10000-f8c17fff : 0000:01:00.0
      f8c10000-f8c17fff : nvme
fea00000-feafffff : Reserved
fec00000-fec00fff : Reserved
  fec00000-fec003ff : IOAPIC 0
fec10000-fec10fff : Reserved
  fec10000-fec10fff : pnp 00:04
fed00000-fed00fff : Reserved
  fed00000-fed003ff : HPET 0
fed40000-fed44fff : Reserved
fed80000-fed8ffff : Reserved
  fed80000-fed814ff : pnp 00:04
  fed81500-fed818ff : AMDI0030:00
  fed81900-fed8ffff : pnp 00:04
fedc0000-fedc0fff : Reserved
  fedc0000-fedc0fff : pnp 00:04
fedc2000-fedc8fff : Reserved
  fedc2000-fedc2fff : AMDI0010:00
    fedc2000-fedc2fff : AMDI0010:00 AMDI0010:00
  fedc3000-fedc3fff : AMDI0010:01
    fedc3000-fedc3fff : AMDI0010:01 AMDI0010:01
  fedc4000-fedc4fff : AMDI0010:02
    fedc4000-fedc4fff : AMDI0010:02 AMDI0010:02
  fedc5000-fedc5fff : AMDI0010:03
    fedc5000-fedc5fff : AMDI0010:03 AMDI0010:03
  fedc7000-fedc7fff : AMDI0020:00
  fedc8000-fedc8fff : AMDI0020:01
fedc9000-fedc9fff : AMDI0020:00
  fedc9000-fedc901f : serial
fedca000-fedcafff : AMDI0020:01
  fedca000-fedca01f : serial
fedcb000-fedcbfff : AMDI0010:05
  fedcb000-fedcbfff : AMDI0010:05 AMDI0010:05
fee00000-feefffff : Reserved
  fee00000-fee00fff : Local APIC
    fee00000-fee00fff : pnp 00:04
ff000000-ffffffff : Reserved
  ff000000-ffffffff : pnp 00:04
100000000-184dbbffff : System RAM
  7b4200000-7b4e01f81 : Kernel code
  7b5000000-7b5745fff : Kernel rodata
  7b5800000-7b5a3e7ff : Kernel data
  7b5fb5000-7b63fffff : Kernel bss
184dbc0000-184fffffff : Reserved
10021000000-30020ffffff : PCI Bus 0000:c0
  10021000000-100211fffff : PCI Bus 0000:c1
  10021200000-100213fffff : PCI Bus 0000:c2
  10021400000-100215fffff : PCI Bus 0000:c3
  10021600000-100217fffff : PCI Bus 0000:c4
  30020f00000-30020ffffff : PCI Bus 0000:c8
    30020f00000-30020f7ffff : 0000:c8:00.1
    30020f80000-30020ffffff : 0000:c8:00.1
30021000000-50020ffffff : PCI Bus 0000:80
  30021000000-300211fffff : PCI Bus 0000:82
  30021200000-300213fffff : PCI Bus 0000:83
  30021400000-300215fffff : PCI Bus 0000:84
  30021600000-300217fffff : PCI Bus 0000:85
  5001a000000-5001effffff : PCI Bus 0000:81
    5001a000000-5001bffffff : 0000:81:00.1
      5001a000000-5001bffffff : mlx5_core
    5001c000000-5001dffffff : 0000:81:00.0
      5001c000000-5001dffffff : mlx5_core
    5001e000000-5001e7fffff : 0000:81:00.1
    5001e800000-5001effffff : 0000:81:00.0
  5001f100000-5001f1fffff : PCI Bus 0000:86
    5001f100000-5001f17ffff : 0000:86:00.1
    5001f180000-5001f1fffff : 0000:86:00.1
50081000000-70080ffffff : PCI Bus 0000:00
  50081000000-500811fffff : PCI Bus 0000:01
  50081200000-500813fffff : PCI Bus 0000:02
  50081400000-500815fffff : PCI Bus 0000:03
  50081600000-500817fffff : PCI Bus 0000:04
  50081800000-500819fffff : PCI Bus 0000:05
  50081a00000-50081bfffff : PCI Bus 0000:06
  50081c00000-50081dfffff : PCI Bus 0000:07
  50081e00000-50081ffffff : PCI Bus 0000:08
  70080f00000-70080ffffff : PCI Bus 0000:0a
    70080f00000-70080f7ffff : 0000:0a:00.1
    70080f80000-70080ffffff : 0000:0a:00.1
70081000000-90080ffffff : PCI Bus 0000:40
  70081000000-700811fffff : PCI Bus 0000:41
  70081200000-700813fffff : PCI Bus 0000:42
  70081400000-700815fffff : PCI Bus 0000:43
  70081600000-700817fffff : PCI Bus 0000:44
  70081800000-700819fffff : PCI Bus 0000:45
  70081a00000-70081bfffff : PCI Bus 0000:46
  70081c00000-70081dfffff : PCI Bus 0000:47
  70081e00000-70081ffffff : PCI Bus 0000:48
  90080f00000-90080ffffff : PCI Bus 0000:49
    90080f00000-90080f7ffff : 0000:49:00.1
    90080f80000-90080ffffff : 0000:49:00.1
3ffc00000000-3ffc03ffffff : Reserved

PCI: see attached

[-- Attachment #4: lspci --]
[-- Type: application/octet-stream, Size: 283072 bytes --]

00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 28
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee04000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 29
	NUMA node: 0
	Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8c00000-f8cfffff [size=1M]
	Prefetchable memory behind bridge: 0000050081000000-00000500811fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #1, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee06000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 30
	NUMA node: 0
	Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8b00000-f8bfffff [size=1M]
	Prefetchable memory behind bridge: 0000050081200000-00000500813fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #2, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee10000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 31
	NUMA node: 0
	Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8a00000-f8afffff [size=1M]
	Prefetchable memory behind bridge: 0000050081400000-00000500815fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #3, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee12000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:01.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 32
	NUMA node: 0
	Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8900000-f89fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081600000-00000500817fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #4, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee14000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 33
	NUMA node: 0
	Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8800000-f88fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081800000-00000500819fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #5, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee16000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 34
	NUMA node: 0
	Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8700000-f87fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081a00000-0000050081bfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #6, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee18000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 35
	NUMA node: 0
	Bus: primary=00, secondary=07, subordinate=07, sec-latency=0
	I/O behind bridge: 00002000-00002fff [size=4K]
	Memory behind bridge: f8600000-f86fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081c00000-0000050081dfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #7, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1a000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 36
	NUMA node: 0
	Bus: primary=00, secondary=08, subordinate=08, sec-latency=0
	I/O behind bridge: 00001000-00001fff [size=4K]
	Memory behind bridge: f8500000-f85fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081e00000-0000050081ffffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #8, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1c000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:05.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14aa (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 37
	NUMA node: 0
	Bus: primary=00, secondary=09, subordinate=09, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8400000-f84fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #33, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1e000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 38
	NUMA node: 0
	Bus: primary=00, secondary=0a, subordinate=0a, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8000000-f82fffff [size=3M]
	Prefetchable memory behind bridge: 0000070080f00000-0000070080ffffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee08000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

00:07.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 39
	NUMA node: 0
	Bus: primary=00, secondary=0b, subordinate=0b, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8300000-f83fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0a000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 71)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap- 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0
	Kernel driver in use: piix4_smbus
	Kernel modules: i2c_piix4

00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
	Subsystem: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge
	Control: I/O+ Mem+ BusMaster+ SpecCycle+ MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	NUMA node: 0

00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ad
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ae
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14af
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b0
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b1
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b2
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b3
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

01:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 1
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 71
	NUMA node: 0
	Region 0: Memory at f8c10000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8c00000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08PTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bd-05
	Kernel driver in use: nvme
	Kernel modules: nvme

02:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 2
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 73
	NUMA node: 0
	Region 0: Memory at f8b10000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8b00000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08QTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bd-46
	Kernel driver in use: nvme
	Kernel modules: nvme

03:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 3
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 74
	NUMA node: 0
	Region 0: Memory at f8a10000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8a00000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08ETM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-ba-bc
	Kernel driver in use: nvme
	Kernel modules: nvme

04:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 4
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 77
	NUMA node: 0
	Region 0: Memory at f8910000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8900000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A089TM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-b9-77
	Kernel driver in use: nvme
	Kernel modules: nvme

05:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 5
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 79
	NUMA node: 0
	Region 0: Memory at f8810000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8800000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08FTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-ba-fd
	Kernel driver in use: nvme
	Kernel modules: nvme

06:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 6
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 27
	NUMA node: 0
	Region 0: Memory at f8710000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8700000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08HTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bb-7f
	Kernel driver in use: nvme
	Kernel modules: nvme

07:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 7
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 81
	NUMA node: 0
	Region 0: Memory at f8610000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8600000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08KTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bc-01
	Kernel driver in use: nvme
	Kernel modules: nvme

08:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 8
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 84
	NUMA node: 0
	Region 0: Memory at f8510000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8500000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08STM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bd-c8
	Kernel driver in use: nvme
	Kernel modules: nvme

09:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/980PRO (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd Device aa89
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 86
	NUMA node: 0
	Region 0: Memory at f8410000 (64-bit, non-prefetchable) [size=16K]
	Expansion ROM at f8400000 [disabled] [size=64K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [70] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: Upstream Port
	Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
		Vector table: BAR=0 offset=00003000
		PBA: BAR=0 offset=00002000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [168 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [178 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [198 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [1bc v1] Lane Margining at the Receiver <?>
	Capabilities: [3a0 v1] Data Link Feature <?>
	Kernel driver in use: nvme
	Kernel modules: nvme

0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

0a:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 70080f80000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 70080f00000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 4
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

0a:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 14c9 (rev da) (prog-if 30 [XHCI])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin C routed to IRQ 553
	NUMA node: 0
	Region 0: Memory at f8100000 (64-bit, non-prefetchable) [size=1M]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D3 NoSoftRst- PME-Enable+ DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable+ Count=8 Masked-
		Vector table: BAR=0 offset=000fe000
		PBA: BAR=0 offset=000ff000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 5
		ARICtl:	MFVC- ACS-, Function Group: 0
	Kernel driver in use: xhci_hcd
	Kernel modules: xhci_pci

0a:00.5 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 14ca
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin D routed to IRQ 74
	NUMA node: 0
	Region 2: Memory at f8000000 (32-bit, non-prefetchable) [size=1M]
	Region 5: Memory at f8200000 (32-bit, non-prefetchable) [size=8K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable+ Count=2 Masked-
		Vector table: BAR=5 offset=00000000
		PBA: BAR=5 offset=00001000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Kernel driver in use: ccp
	Kernel modules: ccp

0b:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 95
	NUMA node: 0
	Region 5: Memory at f8301000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee13000  Data: 0022
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: ahci
	Kernel modules: ahci

0b:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin B routed to IRQ 97
	NUMA node: 0
	Region 5: Memory at f8300000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee15000  Data: 0022
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Kernel driver in use: ahci
	Kernel modules: ahci

40:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 41
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee0c000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 42
	NUMA node: 0
	Bus: primary=40, secondary=41, subordinate=41, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b2200000-b22fffff [size=1M]
	Prefetchable memory behind bridge: 0000070081000000-00000700811fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #9, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0e000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 43
	NUMA node: 0
	Bus: primary=40, secondary=42, subordinate=42, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b2100000-b21fffff [size=1M]
	Prefetchable memory behind bridge: 0000070081200000-00000700813fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #10, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee01000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 44
	NUMA node: 0
	Bus: primary=40, secondary=43, subordinate=43, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b2000000-b20fffff [size=1M]
	Prefetchable memory behind bridge: 0000070081400000-00000700815fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #11, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee03000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:01.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 45
	NUMA node: 0
	Bus: primary=40, secondary=44, subordinate=44, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0000000-b01fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081600000-00000700817fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #12, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee05000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 46
	NUMA node: 0
	Bus: primary=40, secondary=45, subordinate=45, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0200000-b03fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081800000-00000700819fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #13, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee07000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 47
	NUMA node: 0
	Bus: primary=40, secondary=46, subordinate=46, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0400000-b05fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081a00000-0000070081bfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #14, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee11000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 48
	NUMA node: 0
	Bus: primary=40, secondary=47, subordinate=47, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0600000-b07fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081c00000-0000070081dfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #15, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee13000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 49
	NUMA node: 0
	Bus: primary=40, secondary=48, subordinate=48, sec-latency=0
	I/O behind bridge: 00003000-00003fff [size=4K]
	Memory behind bridge: b0800000-b09fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081e00000-0000070081ffffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #16, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee15000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 51
	NUMA node: 0
	Bus: primary=40, secondary=49, subordinate=49, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: fff00000-000fffff [disabled]
	Prefetchable memory behind bridge: 0000090080f00000-0000090080ffffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee17000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

41:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 9
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 87
	NUMA node: 0
	Region 0: Memory at b2210000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at b2200000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08LTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bc-42
	Kernel driver in use: nvme
	Kernel modules: nvme

42:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 10
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 90
	NUMA node: 0
	Region 0: Memory at b2110000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at b2100000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08MTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bc-83
	Kernel driver in use: nvme
	Kernel modules: nvme

43:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 11
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 92
	NUMA node: 0
	Region 0: Memory at b2010000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at b2000000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08GTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bb-3e
	Kernel driver in use: nvme
	Kernel modules: nvme

49:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

49:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 90080f80000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 90080f00000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

80:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 53
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee19000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

80:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 54
	NUMA node: 0
	Bus: primary=80, secondary=81, subordinate=81, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f2000000-f21fffff [size=2M]
	Prefetchable memory behind bridge: 000005001a000000-000005001effffff [size=80M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x8, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (downgraded), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #9, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1b000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

80:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 55
	NUMA node: 0
	Bus: primary=80, secondary=82, subordinate=82, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f0000000-f01fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021000000-00000300211fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #17, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1d000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 56
	NUMA node: 0
	Bus: primary=80, secondary=83, subordinate=83, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f0200000-f03fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021200000-00000300213fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #18, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1f000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 57
	NUMA node: 0
	Bus: primary=80, secondary=84, subordinate=84, sec-latency=0
	I/O behind bridge: 00005000-00005fff [size=4K]
	Memory behind bridge: f0400000-f05fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021400000-00000300215fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #19, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee09000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 58
	NUMA node: 0
	Bus: primary=80, secondary=85, subordinate=85, sec-latency=0
	I/O behind bridge: 00004000-00004fff [size=4K]
	Memory behind bridge: f0600000-f07fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021600000-00000300217fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #20, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0b000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 60
	NUMA node: 0
	Bus: primary=80, secondary=86, subordinate=86, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: fff00000-000fffff [disabled]
	Prefetchable memory behind bridge: 000005001f100000-000005001f1fffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0d000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

81:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
	Subsystem: Mellanox Technologies ConnectX®-5 EN network interface card, 10/25GbE dual-port SFP28, PCIe3.0 x8, tall bracket ; MCX512A-ACAT
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 454
	NUMA node: 0
	Region 0: Memory at 5001c000000 (64-bit, prefetchable) [size=32M]
	Expansion ROM at f2100000 [disabled] [size=1M]
	Capabilities: [60] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x8, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (ok), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABC, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn+
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [48] Vital Product Data
		Product Name: CX512A - ConnectX-5 SFP28
		Read-only fields:
			[PN] Part number: MCX512A-ACUT    
			[EC] Engineering changes: B7
			[V2] Vendor specific: MCX512A-ACUT    
			[SN] Serial number: MT2218K02191          
			[V3] Vendor specific: 923f71a1abc7ec1180001070fdd352f8
			[VA] Vendor specific: MLX:MODL=CX512A:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0
			[V0] Vendor specific: PCIeGen3 x8
			[VU] Vendor specific: MT2218K02191MLNXS0D0F0 
			[RV] Reserved: checksum good, 0 byte(s) reserved
		End
	Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [c0] Vendor Specific Information: Len=18 <?>
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0-,D1-,D2-,D3hot-,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 08, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap:	Migration-, Interrupt Message Number: 000
		IOVCtl:	Enable- Migration- Interrupt- MSE- ARIHierarchy+
		IOVSta:	Migration-
		Initial VFs: 8, Total VFs: 8, Number of VFs: 0, Function Dependency Link: 00
		VF offset: 2, stride: 1, Device ID: 1018
		Supported Page Size: 000007ff, System Page Size: 00000001
		Region 0: Memory at 000005001e800000 (64-bit, prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [1c0 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [230 v1] Access Control Services
		ACSCap:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Kernel driver in use: mlx5_core
	Kernel modules: mlx5_core

81:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
	Subsystem: Mellanox Technologies ConnectX®-5 EN network interface card, 10/25GbE dual-port SFP28, PCIe3.0 x8, tall bracket ; MCX512A-ACAT
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin B routed to IRQ 52
	NUMA node: 0
	Region 0: Memory at 5001a000000 (64-bit, prefetchable) [size=32M]
	Expansion ROM at f2000000 [disabled] [size=1M]
	Capabilities: [60] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x8, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (ok), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABC, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn+
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [48] Vital Product Data
		Product Name: CX512A - ConnectX-5 SFP28
		Read-only fields:
			[PN] Part number: MCX512A-ACUT    
			[EC] Engineering changes: B7
			[V2] Vendor specific: MCX512A-ACUT    
			[SN] Serial number: MT2218K02191          
			[V3] Vendor specific: 923f71a1abc7ec1180001070fdd352f8
			[VA] Vendor specific: MLX:MODL=CX512A:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0
			[V0] Vendor specific: PCIeGen3 x8
			[VU] Vendor specific: MT2218K02191MLNXS0D0F1 
			[RV] Reserved: checksum good, 0 byte(s) reserved
		End
	Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [c0] Vendor Specific Information: Len=18 <?>
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0-,D1-,D2-,D3hot-,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 08, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap:	Migration-, Interrupt Message Number: 000
		IOVCtl:	Enable- Migration- Interrupt- MSE- ARIHierarchy-
		IOVSta:	Migration-
		Initial VFs: 8, Total VFs: 8, Number of VFs: 0, Function Dependency Link: 01
		VF offset: 9, stride: 1, Device ID: 1018
		Supported Page Size: 000007ff, System Page Size: 00000001
		Region 0: Memory at 000005001e000000 (64-bit, prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [230 v1] Access Control Services
		ACSCap:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Kernel driver in use: mlx5_core
	Kernel modules: mlx5_core

86:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

86:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 5001f180000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 5001f100000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

c0:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 62
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee0f000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

c0:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 63
	NUMA node: 0
	Bus: primary=c0, secondary=c1, subordinate=c1, sec-latency=0
	I/O behind bridge: 00006000-00006fff [size=4K]
	Memory behind bridge: ba000000-ba1fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021000000-00000100211fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #21, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee00000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 64
	NUMA node: 0
	Bus: primary=c0, secondary=c2, subordinate=c2, sec-latency=0
	I/O behind bridge: 00007000-00007fff [size=4K]
	Memory behind bridge: ba200000-ba3fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021200000-00000100213fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #22, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee02000  Data: 0022
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 65
	NUMA node: 0
	Bus: primary=c0, secondary=c3, subordinate=c3, sec-latency=0
	I/O behind bridge: 00008000-00008fff [size=4K]
	Memory behind bridge: ba400000-ba5fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021400000-00000100215fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #23, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee04000  Data: 0022
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 66
	NUMA node: 0
	Bus: primary=c0, secondary=c4, subordinate=c4, sec-latency=0
	I/O behind bridge: 00009000-00009fff [size=4K]
	Memory behind bridge: ba600000-ba7fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021600000-00000100217fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #24, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee06000  Data: 0022
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:05.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14aa (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 67
	NUMA node: 0
	Bus: primary=c0, secondary=c5, subordinate=c6, sec-latency=0
	I/O behind bridge: 0000f000-0000ffff [size=4K]
	Memory behind bridge: bc000000-bd0fffff [size=17M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA+ VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 2.5GT/s, Width x1, ASPM not supported
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (ok), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #0, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee10000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 1453
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [370 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
		L1SubCtl2:
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:05.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14aa (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 68
	NUMA node: 0
	Bus: primary=c0, secondary=c7, subordinate=c7, sec-latency=0
	I/O behind bridge: 0000e000-0000efff [size=4K]
	Memory behind bridge: bd400000-bd4fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 2.5GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (ok), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #0, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee12000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 1453
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [370 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
		L1SubCtl2:
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 69
	NUMA node: 0
	Bus: primary=c0, secondary=c8, subordinate=c8, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: bd300000-bd3fffff [size=1M]
	Prefetchable memory behind bridge: 0000030020f00000-0000030020ffffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee14000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

c0:07.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 70
	NUMA node: 0
	Bus: primary=c0, secondary=c9, subordinate=c9, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: bd200000-bd2fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee16000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

c5:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 06) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 26
	NUMA node: 0
	Bus: primary=c5, secondary=c6, subordinate=c6, sec-latency=32
	I/O behind bridge: 0000f000-0000ffff [size=4K]
	Memory behind bridge: bc000000-bd0fffff [size=17M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz+ FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA+ VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [78] Power Management version 3
		Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [80] Express (v2) PCI-Express to PCI/PCI-X Bridge, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ BrConfRtry-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <32us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Subsystem: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge
	Capabilities: [100 v1] Virtual Channel
		Caps:	LPEVC=0 RefClk=100ns PATEntryBits=1
		Arb:	Fixed- WRR32- WRR64- WRR128-
		Ctrl:	ArbSelect=Fixed
		Status:	InProgress-
		VC0:	Caps:	PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
			Arb:	Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
			Ctrl:	Enable+ ID=0 ArbSelect=Fixed TC/VC=01
			Status:	NegoPending- InProgress-
	Capabilities: [200 v1] Designated Vendor-Specific: Vendor=8086 ID=003e Rev=1 Len=80 <?>
	Capabilities: [2c0 v1] Designated Vendor-Specific: Vendor=8086 ID=002e Rev=1 Len=36 <?>
	Capabilities: [800 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000

c6:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 52) (prog-if 00 [VGA controller])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin A routed to IRQ 26
	NUMA node: 0
	Region 0: Memory at bc000000 (32-bit, non-prefetchable) [size=16M]
	Region 1: Memory at bd000000 (32-bit, non-prefetchable) [size=256K]
	Region 2: I/O ports at f000 [size=128]
	Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Kernel driver in use: ast
	Kernel modules: ast

c7:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 571
	NUMA node: 0
	Region 0: Memory at bd400000 (32-bit, non-prefetchable) [size=512K]
	Region 2: I/O ports at e000 [size=32]
	Region 3: Memory at bd480000 (32-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
		Vector table: BAR=3 offset=00000000
		PBA: BAR=3 offset=00002000
	Capabilities: [a0] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
		LnkCap:	Port #2, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <2us, L1 <16us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (ok), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [140 v1] Device Serial Number 74-56-3c-ff-ff-d7-92-48
	Capabilities: [1a0 v1] Transaction Processing Hints
		Device specific mode supported
		Steering table in TPH capability structure
	Kernel driver in use: igb
	Kernel modules: igb

c8:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

c8:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 30020f80000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 30020f00000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 4
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

c8:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 14c9 (rev da) (prog-if 30 [XHCI])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin C routed to IRQ 562
	NUMA node: 0
	Region 0: Memory at bd300000 (64-bit, non-prefetchable) [size=1M]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable+ Count=8 Masked-
		Vector table: BAR=0 offset=000fe000
		PBA: BAR=0 offset=000ff000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Kernel driver in use: xhci_hcd
	Kernel modules: xhci_pci

c9:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 420
	NUMA node: 0
	Region 5: Memory at bd201000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee01000  Data: 002c
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: ahci
	Kernel modules: ahci

c9:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin B routed to IRQ 453
	NUMA node: 0
	Region 5: Memory at bd200000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee11000  Data: 002d
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Kernel driver in use: ahci
	Kernel modules: ahci


[-- Attachment #5: Type: text/plain, Size: 3508 bytes --]



Here’s the block device structure:

nvme1n1                     259:0    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme0n1                     259:1    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme4n1                     259:2    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme2n1                     259:3    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme10n1                    259:4    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme11n1                    259:5    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme6n1                     259:6    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme5n1                     259:7    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme3n1                     259:8    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme7n1                     259:9    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme9n1                     259:10   0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
  └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
    └─backy                 253:4    0 111.8T  0 crypt /srv/backy
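As a quick sanity check on the sizes above: in RAID-6, two members' worth of capacity goes to parity, so the 111.8T array follows from ten active ~14T members (the eleventh NVMe device is a spare, per /proc/mdstat). A small illustrative sketch of that arithmetic, using the exact block count reported by mdstat (the constant names are mine, not from any tool):

```python
# Array size reported by /proc/mdstat, in 1 KiB blocks.
ARRAY_BLOCKS_KIB = 120_006_369_280
ACTIVE_MEMBERS = 10   # "[10/10]" in mdstat; the 11th device is a spare
PARITY_MEMBERS = 2    # RAID-6 stores two parity syndromes per stripe

array_tib = ARRAY_BLOCKS_KIB / 2**30                      # KiB -> TiB
member_tib = array_tib / (ACTIVE_MEMBERS - PARITY_MEMBERS)

print(f"{array_tib:.1f} TiB array, {member_tib:.1f} TiB per member")
# 111.8 TiB array, 14.0 TiB per member — matching lsblk's 111.8T and 14T
```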

And the software RAID setup:

Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 nvme5n1[5] nvme7n1[7] nvme4n1[4] nvme3n1[3] nvme11n1[10](S) nvme0n1[0] nvme1n1[1] nvme2n1[2] nvme6n1[6] nvme9n1[8] nvme10n1[9]
      120006369280 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      bitmap: 1/112 pages [4KB], 65536KB chunk

unused devices: <none>
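To unpack the mdstat device line above: each member carries its slot number in brackets, and a trailing "(S)" marks a spare — so md127 has 10 active members plus nvme11n1 as a hot spare, consistent with "[10/10] [UUUUUUUUUU]". A minimal, illustrative parser (the regex and function name are my own, not part of mdadm):

```python
import re

# Device portion of the /proc/mdstat line quoted above.
MDSTAT_LINE = ("md127 : active raid6 nvme5n1[5] nvme7n1[7] nvme4n1[4] "
               "nvme3n1[3] nvme11n1[10](S) nvme0n1[0] nvme1n1[1] "
               "nvme2n1[2] nvme6n1[6] nvme9n1[8] nvme10n1[9]")

def split_members(line):
    """Return (active, spares) device-name lists from an mdstat device line.

    mdstat prints each member as name[slot], appending "(S)" to spares."""
    members = re.findall(r"(\w+)\[(\d+)\](\(S\))?", line)
    active = [name for name, _slot, spare in members if not spare]
    spares = [name for name, _slot, spare in members if spare]
    return active, spares

active, spares = split_members(MDSTAT_LINE)
print(len(active), spares)   # 10 ['nvme11n1']
```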

Let me know if I can provide any other info that might help diagnose this,

Thanks and hugs,
Christian 

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-06 14:10 PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files Christian Theune
@ 2024-08-06 14:10 ` Christian Theune
  2024-08-07  2:55 ` Yu Kuai
  1 sibling, 0 replies; 88+ messages in thread
From: Christian Theune @ 2024-08-06 14:10 UTC (permalink / raw)
  To: linux-raid, dm-devel

[-- Attachment #1: Type: text/plain, Size: 1517 bytes --]

(Note: I screwed up the original cross-post with linux-raid and I’m not sure of the best way forward to avoid confusion. I first tried forwarding the mail; I’m now resorting to sending the original mail again — it may be suppressed on linux-raid but should go through on dm-devel and keep answers/threads working. I’m happy for advice on fixing errors like this in the future.)

Hi,

we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies data off virtual disk images in 4 MiB (compressed) chunks from Ceph onto a local NVMe-based RAID-6 array that is encrypted using LUKS.

Today I started a larger rsync job from another server (a couple of million files, around 200-300 GiB in total) to migrate data, and we’ve seen the server suddenly lock up twice. Any I/O that touches the mountpoint (/srv/backy) hangs indefinitely. A reset is required to get out of this, as the machine also hangs while trying to unmount the affected filesystem. No messages other than the hung-task warnings are presented, and I have no indication of hardware faults at the moment.

I’m messaging both dm-devel and linux-raid as I suspect either one or both (or an interaction between them) might be the cause.

Kernel:

Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023

See the kernel config attached.

[-- Attachment #2: config.gz --]
[-- Type: application/x-gzip, Size: 59076 bytes --]

[-- Attachment #3: Type: text/plain, Size: 35998 bytes --]



I can’t say whether an earlier version worked, as this specific configuration is relatively new: we switched to dm-crypt about half a year ago and haven’t seen any issues since. We’ve been using NVMe drives all over the place and MD RAID for a long time, and MD + dm-crypt has also been running for a while without problems; what is new is the overall combination of MD + dm-crypt + NVMe drives.

Here’s the output I’m seeing:

[Aug 6 13:17] INFO: task kworker/u64:4:433 blocked for more than 122 seconds.
[  +0.007829]       Not tainted 5.15.138 #1-NixOS
[  +0.005085] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008801] task:kworker/u64:4   state:D stack:    0 pid:  433 ppid:     2 flags:0x00004000
[  +0.000005] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000009] Call Trace:
[  +0.000001]  <TASK>
[  +0.000002]  __schedule+0x373/0x1580
[  +0.000006]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000003]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000005]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000005]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000003]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x2d0/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000003]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000002]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000002]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000003]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000002]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000003]  </TASK>
[  +0.000001] INFO: task kworker/u64:6:435 blocked for more than 122 seconds.
[  +0.007823]       Not tainted 5.15.138 #1-NixOS
[  +0.005092] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008799] task:kworker/u64:6   state:D stack:    0 pid:  435 ppid:     2 flags:0x00004000
[  +0.000002] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000002]  ? sysvec_call_function_single+0xa/0x90
[  +0.000001]  ? asm_sysvec_call_function_single+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000002]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x2d0/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000001]  md_submit_bio+0x6e/0xb0
[  +0.000002]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000000] INFO: task kworker/u64:7:436 blocked for more than 122 seconds.
[  +0.007829]       Not tainted 5.15.138 #1-NixOS
[  +0.005095] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008799] task:kworker/u64:7   state:D stack:    0 pid:  436 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000000]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000002]  ? __blk_queue_split+0x2d0/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000002]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50 
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:11:440 blocked for more than 122 seconds.
[  +0.007923]       Not tainted 5.15.138 #1-NixOS
[  +0.005093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008800] task:kworker/u64:11  state:D stack:    0 pid:  440 ppid:     2 flags:0x00004000
[  +0.000002] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000001]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? __bio_clone_fast+0xa5/0xe0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x2d0/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000001]  md_submit_bio+0x6e/0xb0
[  +0.000002]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000000]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:13:442 blocked for more than 122 seconds.
[  +0.007919]       Not tainted 5.15.138 #1-NixOS
[  +0.005094] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008799] task:kworker/u64:13  state:D stack:    0 pid:  442 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000002] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000001]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000001]  ? finish_wait+0x90/0x90
[  +0.000001]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000003]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000004]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000002]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000001]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000002]  ? set_kthread_struct+0x50/0x50
[  +0.000000]  ret_from_fork+0x22/0x30
[  +0.000003]  </TASK>
[  +0.000004] INFO: task kworker/u64:14:443 blocked for more than 122 seconds.
[  +0.007923]       Not tainted 5.15.138 #1-NixOS
[  +0.005096] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008803] task:kworker/u64:14  state:D stack:    0 pid:  443 ppid:     2 flags:0x00004000
[  +0.000004] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000006]  schedule+0x5b/0xe0
[  +0.000001]  raid5_get_active_stripe+0x50c/0x6c0 [raid456]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000002]  raid5_make_request+0x18c/0xbd0 [raid456]
[  +0.000002]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000002]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000002]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000001]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:15:444 blocked for more than 123 seconds.
[  +0.007922]       Not tainted 5.15.138 #1-NixOS
[  +0.005096] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008802] task:kworker/u64:15  state:D stack:    0 pid:  444 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000002] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000002]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000003]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000003]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000001]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000002]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:17:446 blocked for more than 123 seconds.
[  +0.007922]       Not tainted 5.15.138 #1-NixOS
[  +0.005093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008800] task:kworker/u64:17  state:D stack:    0 pid:  446 ppid:     2 flags:0x00004000
[  +0.000002] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000000]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000001]  ? finish_wait+0x90/0x90
[  +0.000002]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000003]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000002]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000004]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000002]  md_handle_request+0x122/0x1b0
[  +0.000001]  md_submit_bio+0x6e/0xb0
[  +0.000002]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000001]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000002]  process_one_work+0x1d6/0x360
[  +0.000001]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:18:447 blocked for more than 123 seconds.
[  +0.007919]       Not tainted 5.15.138 #1-NixOS
[  +0.005094] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008803] task:kworker/u64:18  state:D stack:    0 pid:  447 ppid:     2 flags:0x00004000
[  +0.000004] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
[  +0.000042] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000004]  ? dm_submit_bio+0x1e8/0x380 [dm_mod]
[  +0.000004]  schedule+0x5b/0xe0
[  +0.000001]  xlog_state_get_iclog_space+0x10d/0x320 [xfs]
[  +0.000028]  ? wake_up_q+0x90/0x90
[  +0.000002]  xlog_write+0x148/0x6c0 [xfs]
[  +0.000024]  xlog_cil_push_work+0x37f/0x4c0 [xfs]
[  +0.000025]  process_one_work+0x1d6/0x360
[  +0.000001]  worker_thread+0x4d/0x3b0
[  +0.000002]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000002]  </TASK>
[  +0.000001] INFO: task kworker/u64:20:449 blocked for more than 123 seconds.
[  +0.007921]       Not tainted 5.15.138 #1-NixOS
[  +0.005093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.008802] task:kworker/u64:20  state:D stack:    0 pid:  449 ppid:     2 flags:0x00004000
[  +0.000003] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[  +0.000003] Call Trace:
[  +0.000001]  <TASK>
[  +0.000001]  __schedule+0x373/0x1580
[  +0.000003]  ? sysvec_apic_timer_interrupt+0xa/0x90
[  +0.000002]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[  +0.000002]  schedule+0x5b/0xe0
[  +0.000001]  md_bitmap_startwrite+0x177/0x1e0
[  +0.000002]  ? finish_wait+0x90/0x90
[  +0.000003]  add_stripe_bio+0x449/0x770 [raid456]
[  +0.000004]  raid5_make_request+0x1cf/0xbd0 [raid456]
[  +0.000002]  ? kmem_cache_alloc_node_trace+0x391/0x3e0
[  +0.000004]  ? linear_map+0x44/0x90 [dm_mod]
[  +0.000003]  ? finish_wait+0x90/0x90
[  +0.000001]  ? __blk_queue_split+0x516/0x580
[  +0.000003]  md_handle_request+0x122/0x1b0
[  +0.000002]  md_submit_bio+0x6e/0xb0
[  +0.000001]  __submit_bio+0x18f/0x220
[  +0.000001]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[  +0.000001]  submit_bio_noacct+0xbe/0x2d0
[  +0.000002]  kcryptd_crypt+0x392/0x550 [dm_crypt]
[  +0.000001]  process_one_work+0x1d6/0x360
[  +0.000002]  worker_thread+0x4d/0x3b0
[  +0.000001]  ? process_one_work+0x360/0x360
[  +0.000001]  kthread+0x118/0x140
[  +0.000001]  ? set_kthread_struct+0x50/0x50
[  +0.000001]  ret_from_fork+0x22/0x30
[  +0.000003]  </TASK>
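To save everyone some scrolling: almost every blocked kcryptd worker above is sleeping in md_bitmap_startwrite (reached via raid5_make_request/add_stripe_bio), one worker is in raid5_get_active_stripe, and the XFS CIL push is stuck behind them in xlog_state_get_iclog_space. I tallied that with a throwaway script along these lines (names are mine, nothing official) that picks the first frame after schedule() in each trace:

```python
import re
from collections import Counter


def blocking_functions(dmesg_text):
    """Count, per hung task, the function it went to sleep in.

    For each "Call Trace:" block we take the first non-'? ' frame after
    schedule(), which is the function that put the task into D state.
    """
    counts = Counter()
    in_trace = False
    seen_schedule = False
    for line in dmesg_text.splitlines():
        if "Call Trace:" in line:
            in_trace, seen_schedule = True, False
            continue
        if not in_trace:
            continue
        if "</TASK>" in line:
            in_trace = False
            continue
        # Strip the "[  +0.000001] " timestamp prefix, keep the frame.
        frame = line.split("]", 1)[-1].strip()
        if frame.startswith("? "):
            continue  # stale stack entries, not part of the real chain
        name = frame.split("+", 1)[0]
        if not seen_schedule:
            if name == "schedule":
                seen_schedule = True
            continue
        counts[name] += 1
        in_trace = False  # one sleeper per trace
    return counts
```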

Environment:

CPU:

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 25
model           : 17
model name      : AMD EPYC 9124 16-Core Processor
stepping        : 1
microcode       : 0xa101144
cpu MHz         : 3000.000
cache size      : 1024 KB
physical id     : 0
siblings        : 32
core id         : 0
cpu cores       : 16
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 16
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
bugs            : sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass srso
bogomips        : 6000.04
TLB size        : 3584 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 52 bits physical, 57 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]

Modules:

tls 118784 0 - Live 0xffffffffc16af000
dm_crypt 61440 1 - Live 0xffffffffc169f000
cbc 16384 0 - Live 0xffffffffc1697000
encrypted_keys 24576 1 dm_crypt, Live 0xffffffffc168c000
trusted 40960 2 dm_crypt,encrypted_keys, Live 0xffffffffc167c000
asn1_encoder 16384 1 trusted, Live 0xffffffffc1675000
tee 36864 1 trusted, Live 0xffffffffc1666000
tpm 94208 1 trusted, Live 0xffffffffc1646000
af_packet 57344 10 - Live 0xffffffffc161e000
vxlan 81920 0 - Live 0xffffffffc1585000
ip6_udp_tunnel 16384 1 vxlan, Live 0xffffffffc157a000
udp_tunnel 20480 1 vxlan, Live 0xffffffffc1570000
dummy 16384 0 - Live 0xffffffffc1275000
msr 16384 0 - Live 0xffffffffc1188000
ext4 937984 1 - Live 0xffffffffc1476000
crc16 16384 1 ext4, Live 0xffffffffc0f5b000
mbcache 16384 1 ext4, Live 0xffffffffc0ee6000
jbd2 167936 1 ext4, Live 0xffffffffc1146000
edac_mce_amd 36864 0 - Live 0xffffffffc170a000
edac_core 69632 0 - Live 0xffffffffc1253000
intel_rapl_msr 20480 0 - Live 0xffffffffc12ff000
intel_rapl_common 28672 1 intel_rapl_msr, Live 0xffffffffc108c000
kvm_amd 147456 0 - Live 0xffffffffc11e0000
sd_mod 57344 1 - Live 0xffffffffc11d1000
kvm 1056768 1 kvm_amd, Live 0xffffffffc1373000
irqbypass 16384 1 kvm, Live 0xffffffffc1311000
crc32_pclmul 16384 0 - Live 0xffffffffc0f29000
ghash_clmulni_intel 16384 0 - Live 0xffffffffc0f1f000
ipmi_ssif 45056 0 - Live 0xffffffffc12ea000
aesni_intel 380928 2 - Live 0xffffffffc127e000
libaes 16384 1 aesni_intel, Live 0xffffffffc120c000
crypto_simd 16384 1 aesni_intel, Live 0xffffffffc1369000
cryptd 28672 3 ghash_clmulni_intel,crypto_simd, Live 0xffffffffc12f7000
rapl 16384 0 - Live 0xffffffffc0ebb000
ast 57344 0 - Live 0xffffffffc1064000
drm_vram_helper 24576 1 ast, Live 0xffffffffc0ff5000
drm_ttm_helper 16384 2 ast,drm_vram_helper, Live 0xffffffffc0f40000
ttm 86016 2 drm_vram_helper,drm_ttm_helper, Live 0xffffffffc11bb000
mlx5_ib 397312 0 - Live 0xffffffffc10e4000
drm_kms_helper 311296 4 ast,drm_vram_helper, Live 0xffffffffc1097000
uas 32768 1 - Live 0xffffffffc1040000
usb_storage 81920 1 uas, Live 0xffffffffc1077000
ib_uverbs 172032 1 mlx5_ib, Live 0xffffffffc1009000
agpgart 40960 1 ttm, Live 0xffffffffc0ffe000
fb_sys_fops 16384 1 drm_kms_helper, Live 0xffffffffc0ee1000
syscopyarea 16384 1 drm_kms_helper, Live 0xffffffffc0eda000
sysfillrect 16384 1 drm_kms_helper, Live 0xffffffffc0ecf000
ib_core 421888 2 mlx5_ib,ib_uverbs, Live 0xffffffffc0f7a000
sysimgblt 16384 1 drm_kms_helper, Live 0xffffffffc0eca000
ccp 114688 1 kvm_amd, Live 0xffffffffc0f02000
i2c_piix4 28672 0 - Live 0xffffffffc0f72000
rng_core 16384 2 tpm,ccp, Live 0xffffffffc0f69000
acpi_ipmi 20480 0 - Live 0xffffffffc0f63000
ipmi_si 73728 2 - Live 0xffffffffc0f48000
pinctrl_amd 32768 0 - Live 0xffffffffc0ef9000
i2c_designware_platform 16384 0 - Live 0xffffffffc0ec5000
8250_dw 16384 0 - Live 0xffffffffc0ef0000
i2c_designware_core 36864 1 i2c_designware_platform, Live 0xffffffffc0eaf000
acpi_cpufreq 28672 0 - Live 0xffffffffc0ea3000
xt_limit 16384 2 - Live 0xffffffffc0e9b000
xt_CT 16384 6 - Live 0xffffffffc0e7d000
ip6table_nat 16384 1 - Live 0xffffffffc0e78000
iptable_nat 16384 1 - Live 0xffffffffc0e73000
xt_conntrack 16384 2 - Live 0xffffffffc0e6e000
ip6t_rpfilter 16384 1 - Live 0xffffffffc0e66000
ipt_rpfilter 16384 1 - Live 0xffffffffc0e61000
ip6table_raw 16384 1 - Live 0xffffffffc0e5c000
iptable_raw 16384 1 - Live 0xffffffffc0e54000
xt_pkttype 16384 4 - Live 0xffffffffc0e4c000
xt_LOG 20480 4 - Live 0xffffffffc0e43000
nf_log_syslog 24576 4 - Live 0xffffffffc0e39000
ip6t_REJECT 16384 2 - Live 0xffffffffc0e31000
nf_reject_ipv6 20480 1 ip6t_REJECT, Live 0xffffffffc0e29000
ipt_REJECT 16384 2 - Live 0xffffffffc0e1d000
nf_reject_ipv4 16384 1 ipt_REJECT, Live 0xffffffffc0e18000
tcp_bbr 20480 30 - Live 0xffffffffc0e0f000
xt_tcpudp 20480 13 - Live 0xffffffffc0e06000
sch_fq_codel 20480 69 - Live 0xffffffffc0ddf000
ip6table_filter 16384 1 - Live 0xffffffffc0da7000
ip6_tables 36864 5 ip6table_nat,ip6table_raw,ip6table_filter, Live 0xffffffffc0d57000
iptable_filter 16384 1 - Live 0xffffffffc08df000
nf_nat_ftp 20480 0 - Live 0xffffffffc0e00000
nf_conntrack_ftp 24576 1 nf_nat_ftp, Live 0xffffffffc0df9000
nf_nat 57344 3 ip6table_nat,iptable_nat,nf_nat_ftp, Live 0xffffffffc0dea000
nf_conntrack 167936 5 xt_CT,xt_conntrack,nf_nat_ftp,nf_conntrack_ftp,nf_nat, Live 0xffffffffc0db3000
nf_defrag_ipv6 24576 1 nf_conntrack, Live 0xffffffffc0d50000
nf_defrag_ipv4 16384 1 nf_conntrack, Live 0xffffffffc0dae000
atkbd 40960 0 - Live 0xffffffffc0d9c000
libps2 20480 1 atkbd, Live 0xffffffffc0d96000
serio 28672 1 atkbd, Live 0xffffffffc0d8e000
loop 40960 0 - Live 0xffffffffc0d83000
tun 61440 0 - Live 0xffffffffc0d73000
tap 28672 0 - Live 0xffffffffc0d6b000
macvlan 28672 0 - Live 0xffffffffc0d63000
bridge 303104 0 - Live 0xffffffffc0d05000
stp 16384 1 bridge, Live 0xffffffffc0cfc000
llc 16384 2 bridge,stp, Live 0xffffffffc0cf4000
mq_deadline 32768 1 - Live 0xffffffffc0ce7000
ipmi_watchdog 32768 1 - Live 0xffffffffc0cde000
ipmi_devintf 20480 0 - Live 0xffffffffc0cd8000
ipmi_msghandler 73728 5 ipmi_ssif,acpi_ipmi,ipmi_si,ipmi_watchdog,ipmi_devintf, Live 0xffffffffc0747000
rbd 122880 0 - Live 0xffffffffc0841000
libceph 462848 1 rbd, Live 0xffffffffc086b000
drm 598016 6 ast,drm_vram_helper,drm_ttm_helper,ttm,drm_kms_helper, Live 0xffffffffc0c45000
fuse 155648 1 - Live 0xffffffffc070a000
backlight 24576 2 drm_kms_helper,drm, Live 0xffffffffc0703000
configfs 57344 1 - Live 0xffffffffc06ec000
ip_tables 36864 4 iptable_nat,iptable_raw,iptable_filter, Live 0xffffffffc06e2000
x_tables 53248 18 xt_limit,xt_CT,ip6table_nat,iptable_nat,xt_conntrack,ip6t_rpfilter,ipt_rpfilter,ip6table_raw,iptable_raw,xt_pkttype,xt_LOG,ip6t_REJECT,ipt_REJECT,xt_tcpudp,ip6table_filter,ip6_tables,iptable_filter,ip_tables, Live 0xffffffffc06cd000
autofs4 53248 0 - Live 0xffffffffc057f000
xfs 2052096 4 - Live 0xffffffffc0a4f000
raid456 184320 1 - Live 0xffffffffc062e000
async_raid6_recov 24576 1 raid456, Live 0xffffffffc0627000
async_memcpy 20480 2 raid456,async_raid6_recov, Live 0xffffffffc0621000
async_pq 20480 2 raid456,async_raid6_recov, Live 0xffffffffc0618000
async_xor 20480 3 raid456,async_raid6_recov,async_pq, Live 0xffffffffc0612000
xor 24576 1 async_xor, Live 0xffffffffc0607000
async_tx 20480 5 raid456,async_raid6_recov,async_memcpy,async_pq,async_xor, Live 0xffffffffc0601000
raid6_pq 122880 3 raid456,async_raid6_recov,async_pq, Live 0xffffffffc0594000
libcrc32c 16384 5 nf_nat,nf_conntrack,libceph,xfs,raid456, Live 0xffffffffc057a000
crc32c_generic 16384 0 - Live 0xffffffffc0509000
igb 270336 0 - Live 0xffffffffc05be000
xhci_pci 24576 0 - Live 0xffffffffc0573000
i2c_algo_bit 16384 2 ast,igb, Live 0xffffffffc04fd000
xhci_pci_renesas 20480 1 xhci_pci, Live 0xffffffffc04e4000
crc32c_intel 24576 3 - Live 0xffffffffc0864000
mlx5_core 1400832 1 mlx5_ib, Live 0xffffffffc08f8000
xhci_hcd 315392 1 xhci_pci, Live 0xffffffffc07d2000
ahci 49152 0 - Live 0xffffffffc0524000
libahci 49152 1 ahci, Live 0xffffffffc08eb000
usbcore 339968 4 uas,usb_storage,xhci_pci,xhci_hcd, Live 0xffffffffc077e000
libata 307200 2 ahci,libahci, Live 0xffffffffc0681000
i2c_core 106496 9 ipmi_ssif,ast,drm_kms_helper,i2c_piix4,i2c_designware_platform,i2c_designware_core,drm,igb,i2c_algo_bit, Live 0xffffffffc0666000
nvme 49152 13 - Live 0xffffffffc0566000
mlxfw 36864 1 mlx5_core, Live 0xffffffffc0774000
dca 16384 1 igb, Live 0xffffffffc065d000
pci_hyperv_intf 16384 1 mlx5_core, Live 0xffffffffc04f3000
psample 20480 1 mlx5_core, Live 0xffffffffc05b8000
nvme_core 139264 15 nvme, Live 0xffffffffc0543000
ptp 32768 2 igb,mlx5_core, Live 0xffffffffc049b000
pps_core 24576 1 ptp, Live 0xffffffffc0538000
t10_pi 16384 2 sd_mod,nvme_core, Live 0xffffffffc0517000
usb_common 16384 2 xhci_hcd,usbcore, Live 0xffffffffc050f000
crc_t10dif 20480 1 t10_pi, Live 0xffffffffc051e000
crct10dif_generic 16384 0 - Live 0xffffffffc04f8000
crct10dif_pclmul 16384 1 - Live 0xffffffffc04eb000
crct10dif_common 16384 3 crc_t10dif,crct10dif_generic,crct10dif_pclmul, Live 0xffffffffc0496000
rtc_cmos 28672 1 - Live 0xffffffffc041b000
dm_mod 155648 15 dm_crypt, Live 0xffffffffc04bd000
bfq 90112 0 - Live 0xffffffffc04a6000
mpt3sas 348160 0 - Live 0xffffffffc0440000
raid_class 16384 1 mpt3sas, Live 0xffffffffc0438000
scsi_transport_sas 49152 1 mpt3sas, Live 0xffffffffc0425000
megaraid_sas 180224 0 - Live 0xffffffffc03ee000
scsi_mod 274432 8 sd_mod,uas,usb_storage,libata,mpt3sas,raid_class,scsi_transport_sas,megaraid_sas, Live 0xffffffffc03aa000
scsi_common 16384 5 uas,usb_storage,libata,mpt3sas,scsi_mod, Live 0xffffffffc03a5000

Driver/Hardware Info:

0000-02ff : PCI Bus 0000:00
 0000-001f : dma1
 0020-0021 : pic1
 0040-0043 : timer0
 0050-0053 : timer1
 0060-0060 : keyboard
 0061-0061 : PNP0800:00
 0064-0064 : keyboard
 0070-0071 : rtc0
 0080-008f : dma page reg
 00a0-00a1 : pic2
 00b2-00b2 : APEI ERST
 00c0-00df : dma2
 00f0-00ff : fpu
0300-03af : PCI Bus 0000:00
03b0-03df : PCI Bus 0000:c0
 03c0-03df : vga+
03e0-0cf7 : PCI Bus 0000:00
 03f8-03ff : serial
 040b-040b : pnp 00:04
 04d0-04d1 : pnp 00:04
 04d6-04d6 : pnp 00:04
 0800-089f : pnp 00:04
   0800-0803 : ACPI PM1a_EVT_BLK
   0804-0805 : ACPI PM1a_CNT_BLK
   0808-080b : ACPI PM_TMR
   0820-0827 : ACPI GPE0_BLK
 0900-090f : pnp 00:04
 0910-091f : pnp 00:04
 0a00-0a3f : pnp 00:02
 0a40-0a5f : pnp 00:02
 0a60-0a6f : pnp 00:02
 0a70-0a7f : pnp 00:02
 0a80-0a8f : pnp 00:02
 0b00-0b0f : pnp 00:04
   0b00-0b08 : piix4_smbus
 0b20-0b3f : pnp 00:04
   0b20-0b28 : piix4_smbus
 0c00-0c01 : pnp 00:04
 0c14-0c14 : pnp 00:04
 0c50-0c51 : pnp 00:04
 0c52-0c52 : pnp 00:04
 0c6c-0c6c : pnp 00:04
 0c6f-0c6f : pnp 00:04
 0ca2-0ca2 : IPI0001:00
   0ca2-0ca2 : IPMI Address 1
     0ca2-0ca2 : ipmi_si
 0ca3-0ca3 : IPI0001:00
   0ca3-0ca3 : IPMI Address 2
     0ca3-0ca3 : ipmi_si
 0cd0-0cd1 : pnp 00:04
 0cd2-0cd3 : pnp 00:04
 0cd4-0cd5 : pnp 00:04
 0cd6-0cd7 : pnp 00:04
 0cd8-0cdf : pnp 00:04
0cf8-0cff : PCI conf1
1000-2fff : PCI Bus 0000:00
 1000-1fff : PCI Bus 0000:08
 2000-2fff : PCI Bus 0000:07
3000-3fff : PCI Bus 0000:40
 3000-3fff : PCI Bus 0000:48
4000-5fff : PCI Bus 0000:80
 4000-4fff : PCI Bus 0000:85
 5000-5fff : PCI Bus 0000:84
6000-ffff : PCI Bus 0000:c0
 6000-6fff : PCI Bus 0000:c1
 7000-7fff : PCI Bus 0000:c2
 8000-8fff : PCI Bus 0000:c3
 9000-9fff : PCI Bus 0000:c4
 e000-efff : PCI Bus 0000:c7
   e000-e01f : 0000:c7:00.0
 f000-ffff : PCI Bus 0000:c5
   f000-ffff : PCI Bus 0000:c6
     f000-f07f : 0000:c6:00.0

00000000-00000fff : Reserved
00001000-0008abff : System RAM
0008ac00-0008ffff : RAM buffer
000a0000-000bffff : PCI Bus 0000:c0
000c0000-000dffff : PCI Bus 0000:00
 000c0000-000c7fff : Video ROM
 000cb000-000cbfff : Adapter ROM
000e0000-000fffff : Reserved
 000f0000-000fffff : System ROM
00100000-75b9dfff : System RAM
75b9e000-75b9ffff : RAM buffer
75ba0000-75be3fff : ACPI Non-volatile Storage
75be4000-75c9ffff : System RAM
75ca0000-75ffffff : Reserved
76000000-9df7ffff : System RAM
9df80000-a34e6fff : Reserved
 a249d018-a249d019 : APEI ERST
 a249d01c-a249d021 : APEI ERST
 a249d028-a249d039 : APEI ERST
 a249d040-a249d04c : APEI ERST
 a249d050-a249f04f : APEI ERST
a34e7000-a35d1fff : ACPI Tables
a35d2000-a3a52fff : ACPI Non-volatile Storage
a3a53000-a63fefff : Reserved
a63ff000-a7f54fff : System RAM
a7f55000-a7ffcfff : RAM buffer
a7ffd000-afffffff : Reserved
b0000000-b22fffff : PCI Bus 0000:40
 b0000000-b01fffff : PCI Bus 0000:44
 b0200000-b03fffff : PCI Bus 0000:45
 b0400000-b05fffff : PCI Bus 0000:46
 b0600000-b07fffff : PCI Bus 0000:47
 b0800000-b09fffff : PCI Bus 0000:48
 b2000000-b20fffff : PCI Bus 0000:43
   b2000000-b200ffff : 0000:43:00.0
   b2010000-b2017fff : 0000:43:00.0
     b2010000-b2017fff : nvme
 b2100000-b21fffff : PCI Bus 0000:42
   b2100000-b210ffff : 0000:42:00.0
   b2110000-b2117fff : 0000:42:00.0
     b2110000-b2117fff : nvme
 b2200000-b22fffff : PCI Bus 0000:41
   b2200000-b220ffff : 0000:41:00.0
   b2210000-b2217fff : 0000:41:00.0
     b2210000-b2217fff : nvme
b8180000-b8180fff : Reserved
 b8180000-b81803ff : IOAPIC 4
b9200000-b92fffff : Reserved
b9410500-b94fffff : AMDI0095:00
b9500000-b9500fff : Reserved
 b9500000-b95003ff : IOAPIC 1
ba000000-bddfffff : PCI Bus 0000:c0
 ba000000-ba1fffff : PCI Bus 0000:c1
 ba200000-ba3fffff : PCI Bus 0000:c2
 ba400000-ba5fffff : PCI Bus 0000:c3
 ba600000-ba7fffff : PCI Bus 0000:c4
 bc000000-bd0fffff : PCI Bus 0000:c5
   bc000000-bd0fffff : PCI Bus 0000:c6
     bc000000-bcffffff : 0000:c6:00.0
     bd000000-bd03ffff : 0000:c6:00.0
 bd200000-bd2fffff : PCI Bus 0000:c9
   bd200000-bd2007ff : 0000:c9:00.1
     bd200000-bd2007ff : ahci
   bd201000-bd2017ff : 0000:c9:00.0
     bd201000-bd2017ff : ahci
 bd300000-bd3fffff : PCI Bus 0000:c8
   bd300000-bd3fffff : 0000:c8:00.4
     bd300000-bd3fffff : xhci-hcd
 bd400000-bd4fffff : PCI Bus 0000:c7
   bd400000-bd47ffff : 0000:c7:00.0
     bd400000-bd47ffff : igb
   bd480000-bd483fff : 0000:c7:00.0
     bd480000-bd483fff : igb
e0000000-efffffff : PCI MMCONFIG 0000 [bus 00-ff]
 e0000000-efffffff : Reserved
   e0000000-efffffff : pnp 00:00
f0000000-f21fffff : PCI Bus 0000:80
 f0000000-f01fffff : PCI Bus 0000:82
 f0200000-f03fffff : PCI Bus 0000:83
 f0400000-f05fffff : PCI Bus 0000:84
 f0600000-f07fffff : PCI Bus 0000:85
 f2000000-f21fffff : PCI Bus 0000:81
   f2000000-f20fffff : 0000:81:00.1
   f2100000-f21fffff : 0000:81:00.0
f4180000-f4180fff : Reserved
 f4180000-f41803ff : IOAPIC 2
f5180000-f5180fff : Reserved
 f5180000-f51803ff : IOAPIC 3
f6000000-f8cfffff : PCI Bus 0000:00
 f8000000-f82fffff : PCI Bus 0000:0a
   f8000000-f80fffff : 0000:0a:00.5
     f8000000-f80fffff : ccp
   f8100000-f81fffff : 0000:0a:00.4
     f8100000-f81fffff : xhci-hcd
   f8200000-f8201fff : 0000:0a:00.5
     f8200000-f8201fff : ccp
 f8300000-f83fffff : PCI Bus 0000:0b
   f8300000-f83007ff : 0000:0b:00.1
     f8300000-f83007ff : ahci
   f8301000-f83017ff : 0000:0b:00.0
     f8301000-f83017ff : ahci
 f8400000-f84fffff : PCI Bus 0000:09
   f8400000-f840ffff : 0000:09:00.0
   f8410000-f8413fff : 0000:09:00.0
     f8410000-f8413fff : nvme
 f8500000-f85fffff : PCI Bus 0000:08
   f8500000-f850ffff : 0000:08:00.0
   f8510000-f8517fff : 0000:08:00.0
     f8510000-f8517fff : nvme
 f8600000-f86fffff : PCI Bus 0000:07
   f8600000-f860ffff : 0000:07:00.0
   f8610000-f8617fff : 0000:07:00.0
     f8610000-f8617fff : nvme
 f8700000-f87fffff : PCI Bus 0000:06
   f8700000-f870ffff : 0000:06:00.0
   f8710000-f8717fff : 0000:06:00.0
     f8710000-f8717fff : nvme
 f8800000-f88fffff : PCI Bus 0000:05
   f8800000-f880ffff : 0000:05:00.0
   f8810000-f8817fff : 0000:05:00.0
     f8810000-f8817fff : nvme
 f8900000-f89fffff : PCI Bus 0000:04
   f8900000-f890ffff : 0000:04:00.0
   f8910000-f8917fff : 0000:04:00.0
     f8910000-f8917fff : nvme
 f8a00000-f8afffff : PCI Bus 0000:03
   f8a00000-f8a0ffff : 0000:03:00.0
   f8a10000-f8a17fff : 0000:03:00.0
     f8a10000-f8a17fff : nvme
 f8b00000-f8bfffff : PCI Bus 0000:02
   f8b00000-f8b0ffff : 0000:02:00.0
   f8b10000-f8b17fff : 0000:02:00.0
     f8b10000-f8b17fff : nvme
 f8c00000-f8cfffff : PCI Bus 0000:01
   f8c00000-f8c0ffff : 0000:01:00.0
   f8c10000-f8c17fff : 0000:01:00.0
     f8c10000-f8c17fff : nvme
fea00000-feafffff : Reserved
fec00000-fec00fff : Reserved
 fec00000-fec003ff : IOAPIC 0
fec10000-fec10fff : Reserved
 fec10000-fec10fff : pnp 00:04
fed00000-fed00fff : Reserved
 fed00000-fed003ff : HPET 0
fed40000-fed44fff : Reserved
fed80000-fed8ffff : Reserved
 fed80000-fed814ff : pnp 00:04
 fed81500-fed818ff : AMDI0030:00
 fed81900-fed8ffff : pnp 00:04
fedc0000-fedc0fff : Reserved
 fedc0000-fedc0fff : pnp 00:04
fedc2000-fedc8fff : Reserved
 fedc2000-fedc2fff : AMDI0010:00
   fedc2000-fedc2fff : AMDI0010:00 AMDI0010:00
 fedc3000-fedc3fff : AMDI0010:01
   fedc3000-fedc3fff : AMDI0010:01 AMDI0010:01
 fedc4000-fedc4fff : AMDI0010:02
   fedc4000-fedc4fff : AMDI0010:02 AMDI0010:02
 fedc5000-fedc5fff : AMDI0010:03
   fedc5000-fedc5fff : AMDI0010:03 AMDI0010:03
 fedc7000-fedc7fff : AMDI0020:00
 fedc8000-fedc8fff : AMDI0020:01
fedc9000-fedc9fff : AMDI0020:00
 fedc9000-fedc901f : serial
fedca000-fedcafff : AMDI0020:01
 fedca000-fedca01f : serial
fedcb000-fedcbfff : AMDI0010:05
 fedcb000-fedcbfff : AMDI0010:05 AMDI0010:05
fee00000-feefffff : Reserved
 fee00000-fee00fff : Local APIC
   fee00000-fee00fff : pnp 00:04
ff000000-ffffffff : Reserved
 ff000000-ffffffff : pnp 00:04
100000000-184dbbffff : System RAM
 7b4200000-7b4e01f81 : Kernel code
 7b5000000-7b5745fff : Kernel rodata
 7b5800000-7b5a3e7ff : Kernel data
 7b5fb5000-7b63fffff : Kernel bss
184dbc0000-184fffffff : Reserved
10021000000-30020ffffff : PCI Bus 0000:c0
 10021000000-100211fffff : PCI Bus 0000:c1
 10021200000-100213fffff : PCI Bus 0000:c2
 10021400000-100215fffff : PCI Bus 0000:c3
 10021600000-100217fffff : PCI Bus 0000:c4
 30020f00000-30020ffffff : PCI Bus 0000:c8
   30020f00000-30020f7ffff : 0000:c8:00.1
   30020f80000-30020ffffff : 0000:c8:00.1
30021000000-50020ffffff : PCI Bus 0000:80
 30021000000-300211fffff : PCI Bus 0000:82
 30021200000-300213fffff : PCI Bus 0000:83
 30021400000-300215fffff : PCI Bus 0000:84
 30021600000-300217fffff : PCI Bus 0000:85
 5001a000000-5001effffff : PCI Bus 0000:81
   5001a000000-5001bffffff : 0000:81:00.1
     5001a000000-5001bffffff : mlx5_core
   5001c000000-5001dffffff : 0000:81:00.0
     5001c000000-5001dffffff : mlx5_core
   5001e000000-5001e7fffff : 0000:81:00.1
   5001e800000-5001effffff : 0000:81:00.0
 5001f100000-5001f1fffff : PCI Bus 0000:86
   5001f100000-5001f17ffff : 0000:86:00.1
   5001f180000-5001f1fffff : 0000:86:00.1
50081000000-70080ffffff : PCI Bus 0000:00
 50081000000-500811fffff : PCI Bus 0000:01
 50081200000-500813fffff : PCI Bus 0000:02
 50081400000-500815fffff : PCI Bus 0000:03
 50081600000-500817fffff : PCI Bus 0000:04
 50081800000-500819fffff : PCI Bus 0000:05
 50081a00000-50081bfffff : PCI Bus 0000:06
 50081c00000-50081dfffff : PCI Bus 0000:07
 50081e00000-50081ffffff : PCI Bus 0000:08
 70080f00000-70080ffffff : PCI Bus 0000:0a
   70080f00000-70080f7ffff : 0000:0a:00.1
   70080f80000-70080ffffff : 0000:0a:00.1
70081000000-90080ffffff : PCI Bus 0000:40
 70081000000-700811fffff : PCI Bus 0000:41
 70081200000-700813fffff : PCI Bus 0000:42
 70081400000-700815fffff : PCI Bus 0000:43
 70081600000-700817fffff : PCI Bus 0000:44
 70081800000-700819fffff : PCI Bus 0000:45
 70081a00000-70081bfffff : PCI Bus 0000:46
 70081c00000-70081dfffff : PCI Bus 0000:47
 70081e00000-70081ffffff : PCI Bus 0000:48
 90080f00000-90080ffffff : PCI Bus 0000:49
   90080f00000-90080f7ffff : 0000:49:00.1
   90080f80000-90080ffffff : 0000:49:00.1
3ffc00000000-3ffc03ffffff : Reserved

PCI: see attached

[-- Attachment #4: lspci --]
[-- Type: application/octet-stream, Size: 283072 bytes --]

00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 28
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee04000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 29
	NUMA node: 0
	Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8c00000-f8cfffff [size=1M]
	Prefetchable memory behind bridge: 0000050081000000-00000500811fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #1, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee06000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 30
	NUMA node: 0
	Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8b00000-f8bfffff [size=1M]
	Prefetchable memory behind bridge: 0000050081200000-00000500813fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #2, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee10000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 31
	NUMA node: 0
	Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8a00000-f8afffff [size=1M]
	Prefetchable memory behind bridge: 0000050081400000-00000500815fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #3, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee12000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:01.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 32
	NUMA node: 0
	Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8900000-f89fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081600000-00000500817fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #4, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee14000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 33
	NUMA node: 0
	Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8800000-f88fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081800000-00000500819fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #5, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee16000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 34
	NUMA node: 0
	Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8700000-f87fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081a00000-0000050081bfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #6, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee18000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 35
	NUMA node: 0
	Bus: primary=00, secondary=07, subordinate=07, sec-latency=0
	I/O behind bridge: 00002000-00002fff [size=4K]
	Memory behind bridge: f8600000-f86fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081c00000-0000050081dfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #7, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1a000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 36
	NUMA node: 0
	Bus: primary=00, secondary=08, subordinate=08, sec-latency=0
	I/O behind bridge: 00001000-00001fff [size=4K]
	Memory behind bridge: f8500000-f85fffff [size=1M]
	Prefetchable memory behind bridge: 0000050081e00000-0000050081ffffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #8, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1c000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:05.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14aa (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 37
	NUMA node: 0
	Bus: primary=00, secondary=09, subordinate=09, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8400000-f84fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #33, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1e000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 38
	NUMA node: 0
	Bus: primary=00, secondary=0a, subordinate=0a, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8000000-f82fffff [size=3M]
	Prefetchable memory behind bridge: 0000070080f00000-0000070080ffffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee08000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

00:07.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 39
	NUMA node: 0
	Bus: primary=00, secondary=0b, subordinate=0b, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f8300000-f83fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0a000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 71)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap- 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0
	Kernel driver in use: piix4_smbus
	Kernel modules: i2c_piix4

00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
	Subsystem: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge
	Control: I/O+ Mem+ BusMaster+ SpecCycle+ MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	NUMA node: 0

00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ad
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ae
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14af
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b0
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b1
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b2
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b3
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14b4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

01:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 1
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 71
	NUMA node: 0
	Region 0: Memory at f8c10000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8c00000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08PTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bd-05
	Kernel driver in use: nvme
	Kernel modules: nvme

02:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 2
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 73
	NUMA node: 0
	Region 0: Memory at f8b10000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8b00000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08QTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bd-46
	Kernel driver in use: nvme
	Kernel modules: nvme

03:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 3
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 74
	NUMA node: 0
	Region 0: Memory at f8a10000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8a00000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08ETM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-ba-bc
	Kernel driver in use: nvme
	Kernel modules: nvme

04:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 4
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 77
	NUMA node: 0
	Region 0: Memory at f8910000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8900000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A089TM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-b9-77
	Kernel driver in use: nvme
	Kernel modules: nvme

05:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 5
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 79
	NUMA node: 0
	Region 0: Memory at f8810000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8800000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08FTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-ba-fd
	Kernel driver in use: nvme
	Kernel modules: nvme

06:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 6
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 27
	NUMA node: 0
	Region 0: Memory at f8710000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8700000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08HTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bb-7f
	Kernel driver in use: nvme
	Kernel modules: nvme

07:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 7
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 81
	NUMA node: 0
	Region 0: Memory at f8610000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8600000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08KTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bc-01
	Kernel driver in use: nvme
	Kernel modules: nvme

08:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 8
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 84
	NUMA node: 0
	Region 0: Memory at f8510000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at f8500000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08STM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bd-c8
	Kernel driver in use: nvme
	Kernel modules: nvme

09:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/980PRO (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd Device aa89
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 86
	NUMA node: 0
	Region 0: Memory at f8410000 (64-bit, non-prefetchable) [size=16K]
	Expansion ROM at f8400000 [disabled] [size=64K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [70] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: Upstream Port
	Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
		Vector table: BAR=0 offset=00003000
		PBA: BAR=0 offset=00002000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [168 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [178 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [198 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [1bc v1] Lane Margining at the Receiver <?>
	Capabilities: [3a0 v1] Data Link Feature <?>
	Kernel driver in use: nvme
	Kernel modules: nvme

0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

0a:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 70080f80000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 70080f00000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 4
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

0a:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 14c9 (rev da) (prog-if 30 [XHCI])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin C routed to IRQ 553
	NUMA node: 0
	Region 0: Memory at f8100000 (64-bit, non-prefetchable) [size=1M]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D3 NoSoftRst- PME-Enable+ DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable+ Count=8 Masked-
		Vector table: BAR=0 offset=000fe000
		PBA: BAR=0 offset=000ff000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 5
		ARICtl:	MFVC- ACS-, Function Group: 0
	Kernel driver in use: xhci_hcd
	Kernel modules: xhci_pci

0a:00.5 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 14ca
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin D routed to IRQ 74
	NUMA node: 0
	Region 2: Memory at f8000000 (32-bit, non-prefetchable) [size=1M]
	Region 5: Memory at f8200000 (32-bit, non-prefetchable) [size=8K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable+ Count=2 Masked-
		Vector table: BAR=5 offset=00000000
		PBA: BAR=5 offset=00001000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Kernel driver in use: ccp
	Kernel modules: ccp

0b:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 95
	NUMA node: 0
	Region 5: Memory at f8301000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee13000  Data: 0022
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: ahci
	Kernel modules: ahci

0b:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin B routed to IRQ 97
	NUMA node: 0
	Region 5: Memory at f8300000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee15000  Data: 0022
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Kernel driver in use: ahci
	Kernel modules: ahci

40:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 41
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee0c000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 42
	NUMA node: 0
	Bus: primary=40, secondary=41, subordinate=41, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b2200000-b22fffff [size=1M]
	Prefetchable memory behind bridge: 0000070081000000-00000700811fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #9, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0e000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 43
	NUMA node: 0
	Bus: primary=40, secondary=42, subordinate=42, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b2100000-b21fffff [size=1M]
	Prefetchable memory behind bridge: 0000070081200000-00000700813fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #10, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee01000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 44
	NUMA node: 0
	Bus: primary=40, secondary=43, subordinate=43, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b2000000-b20fffff [size=1M]
	Prefetchable memory behind bridge: 0000070081400000-00000700815fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (downgraded), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #11, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee03000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:01.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 45
	NUMA node: 0
	Bus: primary=40, secondary=44, subordinate=44, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0000000-b01fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081600000-00000700817fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #12, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee05000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 46
	NUMA node: 0
	Bus: primary=40, secondary=45, subordinate=45, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0200000-b03fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081800000-00000700819fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #13, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee07000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 47
	NUMA node: 0
	Bus: primary=40, secondary=46, subordinate=46, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0400000-b05fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081a00000-0000070081bfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #14, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee11000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 48
	NUMA node: 0
	Bus: primary=40, secondary=47, subordinate=47, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: b0600000-b07fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081c00000-0000070081dfffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #15, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee13000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 49
	NUMA node: 0
	Bus: primary=40, secondary=48, subordinate=48, sec-latency=0
	I/O behind bridge: 00003000-00003fff [size=4K]
	Memory behind bridge: b0800000-b09fffff [size=2M]
	Prefetchable memory behind bridge: 0000070081e00000-0000070081ffffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #16, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee15000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 51
	NUMA node: 0
	Bus: primary=40, secondary=49, subordinate=49, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: fff00000-000fffff [disabled]
	Prefetchable memory behind bridge: 0000090080f00000-0000090080ffffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee17000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

41:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 9
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 87
	NUMA node: 0
	Region 0: Memory at b2210000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at b2200000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08LTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bc-42
	Kernel driver in use: nvme
	Kernel modules: nvme

42:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 10
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 90
	NUMA node: 0
	Region 0: Memory at b2110000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at b2100000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08MTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bc-83
	Kernel driver in use: nvme
	Kernel modules: nvme

43:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 001f (rev 01) (prog-if 02 [NVM Express])
	Subsystem: KIOXIA Corporation Device 0009
	Physical Slot: 11
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 92
	NUMA node: 0
	Region 0: Memory at b2010000 (64-bit, non-prefetchable) [size=32K]
	Expansion ROM at b2000000 [disabled] [size=64K]
	Capabilities: [80] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 16GT/s, Width x4, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 16GT/s (ok), Width x4 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Vital Product Data
		Product Name: KIOXIA ESSD
		Read-only fields:
			[PN] Part number: KIOXIA KCD81RUG15T3                     
			[EC] Engineering changes: 0001
			[SN] Serial number: Z3D0A08GTM0J        
			[MN] Manufacture ID: 1E0F
			[RV] Reserved: checksum good, 26 byte(s) reserved
		End
	Capabilities: [d0] MSI-X: Enable+ Count=129 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00004000
	Capabilities: [f8] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Vendor Specific Information: ID=1556 Rev=1 Len=008 <?>
	Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [1e0 v1] Data Link Feature <?>
	Capabilities: [200 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [300 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [340 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [378 v1] Lane Margining at the Receiver <?>
	Capabilities: [800 v1] Device Serial Number 8c-e3-8e-e3-00-9a-bb-3e
	Kernel driver in use: nvme
	Kernel modules: nvme

49:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

49:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 90080f80000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 90080f00000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

80:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 53
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee19000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

80:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ab (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 54
	NUMA node: 0
	Bus: primary=80, secondary=81, subordinate=81, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f2000000-f21fffff [size=2M]
	Prefetchable memory behind bridge: 000005001a000000-000005001effffff [size=80M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x8, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (downgraded), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #9, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd+
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1b000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Capabilities: [57c v1] Designated Vendor-Specific: Vendor=1e98 ID=0007 Rev=0 Len=16 <?>
	Capabilities: [5f4 v1] Designated Vendor-Specific: Vendor=1e98 ID=0004 Rev=0 Len=16 <?>
	Kernel driver in use: pcieport

80:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 55
	NUMA node: 0
	Bus: primary=80, secondary=82, subordinate=82, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f0000000-f01fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021000000-00000300211fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #17, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1d000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 56
	NUMA node: 0
	Bus: primary=80, secondary=83, subordinate=83, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: f0200000-f03fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021200000-00000300213fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #18, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee1f000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 57
	NUMA node: 0
	Bus: primary=80, secondary=84, subordinate=84, sec-latency=0
	I/O behind bridge: 00005000-00005fff [size=4K]
	Memory behind bridge: f0400000-f05fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021400000-00000300215fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #19, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee09000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 58
	NUMA node: 0
	Bus: primary=80, secondary=85, subordinate=85, sec-latency=0
	I/O behind bridge: 00004000-00004fff [size=4K]
	Memory behind bridge: f0600000-f07fffff [size=2M]
	Prefetchable memory behind bridge: 0000030021600000-00000300217fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #20, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0b000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

80:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

80:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 60
	NUMA node: 0
	Bus: primary=80, secondary=86, subordinate=86, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: fff00000-000fffff [disabled]
	Prefetchable memory behind bridge: 000005001f100000-000005001f1fffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee0d000  Data: 0021
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

81:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
	Subsystem: Mellanox Technologies ConnectX®-5 EN network interface card, 10/25GbE dual-port SFP28, PCIe3.0 x8, tall bracket ; MCX512A-ACAT
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 454
	NUMA node: 0
	Region 0: Memory at 5001c000000 (64-bit, prefetchable) [size=32M]
	Expansion ROM at f2100000 [disabled] [size=1M]
	Capabilities: [60] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x8, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (ok), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABC, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn+
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
			 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [48] Vital Product Data
		Product Name: CX512A - ConnectX-5 SFP28
		Read-only fields:
			[PN] Part number: MCX512A-ACUT    
			[EC] Engineering changes: B7
			[V2] Vendor specific: MCX512A-ACUT    
			[SN] Serial number: MT2218K02191          
			[V3] Vendor specific: 923f71a1abc7ec1180001070fdd352f8
			[VA] Vendor specific: MLX:MODL=CX512A:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0
			[V0] Vendor specific: PCIeGen3 x8
			[VU] Vendor specific: MT2218K02191MLNXS0D0F0 
			[RV] Reserved: checksum good, 0 byte(s) reserved
		End
	Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [c0] Vendor Specific Information: Len=18 <?>
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0-,D1-,D2-,D3hot-,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 08, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap:	Migration-, Interrupt Message Number: 000
		IOVCtl:	Enable- Migration- Interrupt- MSE- ARIHierarchy+
		IOVSta:	Migration-
		Initial VFs: 8, Total VFs: 8, Number of VFs: 0, Function Dependency Link: 00
		VF offset: 2, stride: 1, Device ID: 1018
		Supported Page Size: 000007ff, System Page Size: 00000001
		Region 0: Memory at 000005001e800000 (64-bit, prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [1c0 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [230 v1] Access Control Services
		ACSCap:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Kernel driver in use: mlx5_core
	Kernel modules: mlx5_core

81:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
	Subsystem: Mellanox Technologies ConnectX®-5 EN network interface card, 10/25GbE dual-port SFP28, PCIe3.0 x8, tall bracket ; MCX512A-ACAT
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin B routed to IRQ 52
	NUMA node: 0
	Region 0: Memory at 5001a000000 (64-bit, prefetchable) [size=32M]
	Expansion ROM at f2000000 [disabled] [size=1M]
	Capabilities: [60] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x8, ASPM not supported
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 8GT/s (ok), Width x8 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABC, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn+
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [48] Vital Product Data
		Product Name: CX512A - ConnectX-5 SFP28
		Read-only fields:
			[PN] Part number: MCX512A-ACUT    
			[EC] Engineering changes: B7
			[V2] Vendor specific: MCX512A-ACUT    
			[SN] Serial number: MT2218K02191          
			[V3] Vendor specific: 923f71a1abc7ec1180001070fdd352f8
			[VA] Vendor specific: MLX:MODL=CX512A:MN=MLNX:CSKU=V2:UUID=V3:PCI=V0
			[V0] Vendor specific: PCIeGen3 x8
			[VU] Vendor specific: MT2218K02191MLNXS0D0F1 
			[RV] Reserved: checksum good, 0 byte(s) reserved
		End
	Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
		Vector table: BAR=0 offset=00002000
		PBA: BAR=0 offset=00003000
	Capabilities: [c0] Vendor Specific Information: Len=18 <?>
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=375mA PME(D0-,D1-,D2-,D3hot-,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 08, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
		IOVCap:	Migration-, Interrupt Message Number: 000
		IOVCtl:	Enable- Migration- Interrupt- MSE- ARIHierarchy-
		IOVSta:	Migration-
		Initial VFs: 8, Total VFs: 8, Number of VFs: 0, Function Dependency Link: 01
		VF offset: 9, stride: 1, Device ID: 1018
		Supported Page Size: 000007ff, System Page Size: 00000001
		Region 0: Memory at 000005001e000000 (64-bit, prefetchable)
		VF Migration: offset: 00000000, BIR: 0
	Capabilities: [230 v1] Access Control Services
		ACSCap:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Kernel driver in use: mlx5_core
	Kernel modules: mlx5_core

86:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

86:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 5001f180000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 5001f100000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

c0:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14a4 (rev 01)
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:00.3 Generic system peripheral [0807]: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 62
	NUMA node: 0
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] Express (v2) Root Complex Event Collector, MSI 00
		DevCap:	MaxPayload 128 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
			MaxPayload 128 bytes, MaxReadReq 128 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		RootCap: CRSVisible-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
	Capabilities: [84] MSI: Enable+ Count=1/1 Maskable- 64bit-
		Address: fee0f000  Data: 0021
	Capabilities: [90] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a6
	Capabilities: [98] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [110 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
		RootCmd: CERptEn- NFERptEn- FERptEn-
		RootSta: CERcvd- MultCERcvd- UERcvd- MultUERcvd-
			 FirstFatal- NonFatalMsg- FatalMsg- IntMsg 0
		ErrorSrc: ERR_COR: 0000 ERR_FATAL/NONFATAL: 0000
	Capabilities: [158 v2] Root Complex Event Collector <?>
	Kernel driver in use: pcieport

c0:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 63
	NUMA node: 0
	Bus: primary=c0, secondary=c1, subordinate=c1, sec-latency=0
	I/O behind bridge: 00006000-00006fff [size=4K]
	Memory behind bridge: ba000000-ba1fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021000000-00000100211fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #21, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee00000  Data: 0021
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 64
	NUMA node: 0
	Bus: primary=c0, secondary=c2, subordinate=c2, sec-latency=0
	I/O behind bridge: 00007000-00007fff [size=4K]
	Memory behind bridge: ba200000-ba3fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021200000-00000100213fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #22, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet- Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee02000  Data: 0022
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 65
	NUMA node: 0
	Bus: primary=c0, secondary=c3, subordinate=c3, sec-latency=0
	I/O behind bridge: 00008000-00008fff [size=4K]
	Memory behind bridge: ba400000-ba5fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021400000-00000100215fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #23, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee04000  Data: 0022
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a5 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 66
	NUMA node: 0
	Bus: primary=c0, secondary=c4, subordinate=c4, sec-latency=0
	I/O behind bridge: 00009000-00009fff [size=4K]
	Memory behind bridge: ba600000-ba7fffff [size=2M]
	Prefetchable memory behind bridge: 0000010021600000-00000100217fffff [size=2M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #247, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x16 (strange)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn+ PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise-
			Slot #24, PowerLimit 75.000W; Interlock+ NoCompl-
		SltCtl:	Enable: AttnBtn+ PwrFlt- MRL- PresDet- CmdCplt+ HPIrq+ LinkChg+
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock+
			Changed: MRL- PresDet- LinkState-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Via WAKE#, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee06000  Data: 0022
	Capabilities: [c0] Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [440 v1] Lane Margining at the Receiver <?>
	Capabilities: [4d0 v1] Native PCIe Enclosure Management <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:05.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14aa (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 67
	NUMA node: 0
	Bus: primary=c0, secondary=c5, subordinate=c6, sec-latency=0
	I/O behind bridge: 0000f000-0000ffff [size=4K]
	Memory behind bridge: bc000000-bd0fffff [size=17M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA+ VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #1, Speed 2.5GT/s, Width x1, ASPM not supported
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (ok), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #0, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee10000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 1453
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [370 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
		L1SubCtl2:
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:05.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14aa (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin ? routed to IRQ 68
	NUMA node: 0
	Bus: primary=c0, secondary=c7, subordinate=c7, sec-latency=0
	I/O behind bridge: 0000e000-0000efff [size=4K]
	Memory behind bridge: bd400000-bd4fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #2, Speed 2.5GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
			ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (ok), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #0, PowerLimit 75.000W; Interlock- NoCompl+
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
			Changed: MRL- PresDet- LinkState+
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq+ OBFF Via message/WAKE#, ExtFmt+ EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd+
			 AtomicOpsCap: Routing+ 32bit+ 64bit+ 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee12000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 1453
	Capabilities: [c8] HyperTransport: MSI Mapping Enable+ Fixed+
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [2a0 v1] Access Control Services
		ACSCap:	SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
		ACSCtl:	SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
	Capabilities: [370 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
		L1SubCtl2:
	Capabilities: [380 v1] Downstream Port Containment
		DpcCap:	INT Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 6, DL_ActiveErr+
		DpcCtl:	Trigger:0 Cmpl- INT- ErrCor- PoisonedTLP- SwTrigger- DL_ActiveErr-
		DpcSta:	Trigger- Reason:00 INT- RPBusy- TriggerExt:00 RP PIO ErrPtr:1f
		Source:	0000
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Capabilities: [530 v1] Extended Capability ID 0x2b
	Kernel driver in use: pcieport

c0:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 149f (rev 01)
	Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	NUMA node: 0

c0:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 69
	NUMA node: 0
	Bus: primary=c0, secondary=c8, subordinate=c8, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: bd300000-bd3fffff [size=1M]
	Prefetchable memory behind bridge: 0000030020f00000-0000030020ffffff [size=1M]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee14000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

c0:07.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14a7 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 70
	NUMA node: 0
	Bus: primary=c0, secondary=c9, subordinate=c9, sec-latency=0
	I/O behind bridge: 0000f000-00000fff [disabled]
	Memory behind bridge: bd200000-bd2fffff [size=1M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA- VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ RBE+
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive+ BWMgmt- ABWMgmt-
		RootCap: CRSVisible+
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- LN System CLS Not Supported, TPHComp+ ExtTPHComp- ARIFwd-
			 AtomicOpsCap: Routing- 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled, ARIFwd-
			 AtomicOpsCtl: ReqEn- EgressBlck-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
		Address: 00000000fee16000  Data: 0022
	Capabilities: [c0] Subsystem: Advanced Micro Devices, Inc. [AMD] Device 14a4
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [400 v1] Data Link Feature <?>
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: pcieport

c5:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 06) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 26
	NUMA node: 0
	Bus: primary=c5, secondary=c6, subordinate=c6, sec-latency=32
	I/O behind bridge: 0000f000-0000ffff [size=4K]
	Memory behind bridge: bc000000-bd0fffff [size=17M]
	Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff [disabled]
	Secondary status: 66MHz+ FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR+ NoISA- VGA+ VGA16+ MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [78] Power Management version 3
		Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [80] Express (v2) PCI-Express to PCI/PCI-X Bridge, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ SlotPowerLimit 75.000W
		DevCtl:	CorrErr- NonFatalErr- FatalErr- UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ BrConfRtry-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <32us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (downgraded), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [c0] Subsystem: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge
	Capabilities: [100 v1] Virtual Channel
		Caps:	LPEVC=0 RefClk=100ns PATEntryBits=1
		Arb:	Fixed- WRR32- WRR64- WRR128-
		Ctrl:	ArbSelect=Fixed
		Status:	InProgress-
		VC0:	Caps:	PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
			Arb:	Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
			Ctrl:	Enable+ ID=0 ArbSelect=Fixed TC/VC=01
			Status:	NegoPending- InProgress-
	Capabilities: [200 v1] Designated Vendor-Specific: Vendor=8086 ID=003e Rev=1 Len=80 <?>
	Capabilities: [2c0 v1] Designated Vendor-Specific: Vendor=8086 ID=002e Rev=1 Len=36 <?>
	Capabilities: [800 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000

c6:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 52) (prog-if 00 [VGA controller])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin A routed to IRQ 26
	NUMA node: 0
	Region 0: Memory at bc000000 (32-bit, non-prefetchable) [size=16M]
	Region 1: Memory at bd000000 (32-bit, non-prefetchable) [size=256K]
	Region 2: I/O ports at f000 [size=128]
	Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/4 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Kernel driver in use: ast
	Kernel modules: ast

c7:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 571
	NUMA node: 0
	Region 0: Memory at bd400000 (32-bit, non-prefetchable) [size=512K]
	Region 2: I/O ports at e000 [size=32]
	Region 3: Memory at bd480000 (32-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
		Address: 0000000000000000  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
		Vector table: BAR=3 offset=00000000
		PBA: BAR=3 offset=00002000
	Capabilities: [a0] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 512 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
		LnkCap:	Port #2, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <2us, L1 <16us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s (ok), Width x1 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
		CEMsk:	RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
		AERCap:	First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
			MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
		HeaderLog: 00000000 00000000 00000000 00000000
	Capabilities: [140 v1] Device Serial Number 74-56-3c-ff-ff-d7-92-48
	Capabilities: [1a0 v1] Transaction Processing Hints
		Device specific mode supported
		Steering table in TPH capability structure
	Kernel driver in use: igb
	Kernel modules: igb

c8:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ac (rev 01)
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 1
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a

c8:00.1 System peripheral: Advanced Micro Devices, Inc. [AMD] Device 14dc
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	NUMA node: 0
	Region 0: Memory at 30020f80000 (64-bit, prefetchable) [size=512K]
	Region 2: Memory at 30020f00000 (64-bit, prefetchable) [size=512K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp+ ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable- Count=140 Masked-
		Vector table: BAR=0 offset=00040000
		PBA: BAR=0 offset=00041000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 4
		ARICtl:	MFVC- ACS-, Function Group: 0
	Capabilities: [370 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Steering table in MSI-X table
	Capabilities: [550 v1] Designated Vendor-Specific: Vendor=1022 ID=0010 Rev=1 Len=24 <?>

c8:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 14c9 (rev da) (prog-if 30 [XHCI])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin C routed to IRQ 562
	NUMA node: 0
	Region 0: Memory at bd300000 (64-bit, non-prefetchable) [size=1M]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [c0] MSI-X: Enable+ Count=8 Masked-
		Vector table: BAR=0 offset=000fe000
		PBA: BAR=0 offset=000ff000
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
		ARICap:	MFVC- ACS-, Next Function: 0
		ARICtl:	MFVC- ACS-, Function Group: 0
	Kernel driver in use: xhci_hcd
	Kernel modules: xhci_pci

c9:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 420
	NUMA node: 0
	Region 5: Memory at bd201000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkCap2: Supported Link Speeds: 2.5-32GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCtl2: Target Link Speed: 32GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee01000  Data: 002c
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Capabilities: [270 v1] Secondary PCI Express
		LnkCtl3: LnkEquIntrruptEn- PerformEqu-
		LaneErrStat: 0
	Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
	Capabilities: [450 v1] Lane Margining at the Receiver <?>
	Capabilities: [500 v1] Extended Capability ID 0x2a
	Kernel driver in use: ahci
	Kernel modules: ahci

c9:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 91) (prog-if 01 [AHCI 1.0])
	Subsystem: Gigabyte Technology Co., Ltd Device 1000
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin B routed to IRQ 453
	NUMA node: 0
	Region 5: Memory at bd200000 (32-bit, non-prefetchable) [size=2K]
	Capabilities: [48] Vendor Specific Information: Len=08 <?>
	Capabilities: [50] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [64] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
			ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
		DevCtl:	CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
			RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
			ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 32GT/s (ok), Width x16 (ok)
			TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
			 10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
			 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
			 FRS- TPHComp- ExtTPHComp-
			 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
		DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- LTR- OBFF Disabled,
			 AtomicOpsCtl: ReqEn-
		LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete- EqualizationPhase1-
			 EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
			 Retimer- 2Retimers- CrosslinkRes: unsupported
	Capabilities: [a0] MSI: Enable+ Count=1/16 Maskable- 64bit+
		Address: 00000000fee11000  Data: 002d
	Capabilities: [d0] SATA HBA v1.0 InCfgSpace
	Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
	Kernel driver in use: ahci
	Kernel modules: ahci


[-- Attachment #5: Type: text/plain, Size: 3484 bytes --]



Here’s the block device structure:

nvme1n1                     259:0    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme0n1                     259:1    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme4n1                     259:2    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme2n1                     259:3    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme10n1                    259:4    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme11n1                    259:5    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme6n1                     259:6    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme5n1                     259:7    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme3n1                     259:8    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme7n1                     259:9    0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy
nvme9n1                     259:10   0    14T  0 disk
└─md127                       9:127  0 111.8T  0 raid6
 └─vgbackup-backy--crypted 253:0    0 111.8T  0 lvm
   └─backy                 253:4    0 111.8T  0 crypt /srv/backy

And the software raid setup:

Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 nvme5n1[5] nvme7n1[7] nvme4n1[4] nvme3n1[3] nvme11n1[10](S) nvme0n1[0] nvme1n1[1] nvme2n1[2] nvme6n1[6] nvme9n1[8] nvme10n1[9]
     120006369280 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
     bitmap: 1/112 pages [4KB], 65536KB chunk

unused devices: <none>

Let me know if I can provide any other info that might help diagnose this,
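
One thing I can capture on the next hang, instead of resetting straight away, is the blocked-task state via sysrq. A sketch of the commands - shown as a dry run that only prints them; md127 is the array from this report, and running them for real needs root plus CONFIG_MAGIC_SYSRQ:

```shell
# Dry run: the commands are collected and printed, not executed.
# At hang time, run each line as root instead; "echo w" dumps the
# stacks of all D-state (uninterruptibly blocked) tasks to dmesg.
hang_cmds='cat /proc/mdstat
cat /sys/block/md127/md/stripe_cache_active
dmsetup info -c
echo w > /proc/sysrq-trigger
dmesg'
printf '%s\n' "$hang_cmds"
```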

Thanks and hugs,
Christian 

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-06 14:10 PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files Christian Theune
  2024-08-06 14:10 ` Christian Theune
@ 2024-08-07  2:55 ` Yu Kuai
  2024-08-07  5:31   ` Christian Theune
  1 sibling, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-08-07  2:55 UTC (permalink / raw)
  To: Christian Theune, linux-raid@vger.kernel.org, dm-devel@redhat.com,
	yukuai (C)

Hi,

On 2024/08/06 22:10, Christian Theune wrote:
> we are seeing an issue that can be triggered with relative ease on a 
> server that has been working fine for a few weeks. The regular workload 
> is a backup utility that copies off data from virtual disk images in 
> 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array 
> that is encrypted using LUKS.
> 
> Today I started a larger rsync job from another server (that has a 
> couple of million files with around 200-300 gib in total) to migrate 
> data and we’ve seen the server suddenly lock up twice. Any IO that 
> interacts with the mountpoint (/srv/backy) will hang indefinitely. A 
> reset is required to get out of this as the machine will hang trying to 
> unmount the affected filesystem. No other messages than the hung tasks 
> are being presented - I have no indicator for hardware faults at the moment.
> 
> I’m messaging both dm-devel and linux-raid as I’m suspecting either one 
> or both (or an interaction) might be the cause.
> 
> Kernel:
> 
> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU 
> Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023

Since you can trigger this easily, I'd suggest you try the latest
kernel release first.

Thanks,
Kuai

> 
> See the kernel config attached.



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-07  2:55 ` Yu Kuai
@ 2024-08-07  5:31   ` Christian Theune
  2024-08-07  6:46     ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-07  5:31 UTC (permalink / raw)
  To: Yu Kuai; +Cc: linux-raid@vger.kernel.org, dm-devel@redhat.com, yukuai (C)

Sure,

would you prefer me to test on 5.15.x or something else?

> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> On 2024/08/06 22:10, Christian Theune wrote:
>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array that is encrypted using LUKS.
>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 gib in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>> Kernel:
>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
> 
> Since you can trigger this easily, I'd suggest you try the latest
> kernel release first.
> 
> Thanks,
> Kuai
> 
>> See the kernel config attached.


Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-07  5:31   ` Christian Theune
@ 2024-08-07  6:46     ` Christian Theune
  2024-08-07  8:59       ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-07  6:46 UTC (permalink / raw)
  To: Yu Kuai; +Cc: linux-raid@vger.kernel.org, dm-devel, yukuai (C)

I tried updating to 5.15.164, but had to fight our config management, as some options have shifted and needed to be filtered out: NFSD_V3 and NFSD_V2_ACL are now fixed and cause config errors if set. I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.

> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Sure,
> 
> would you prefer me testing on 5.15.x or something else?
> 
>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>> 
>> Hi,
>> 
>> On 2024/08/06 22:10, Christian Theune wrote:
>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVMe-based RAID-6 array that is encrypted using LUKS.
>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 GiB in total) to migrate data, and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this, as the machine will hang trying to unmount the affected filesystem. No messages other than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>> Kernel:
>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>> 
>> Since you can trigger this easily, I'd suggest you try the latest
>> kernel release first.
>> 
>> Thanks,
>> Kuai
>> 
>>> See the kernel config attached.
> 
> 
> Kind regards,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-07  6:46     ` Christian Theune
@ 2024-08-07  8:59       ` Christian Theune
  2024-08-07 21:05         ` John Stoffel
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-07  8:59 UTC (permalink / raw)
  To: Yu Kuai; +Cc: linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

I had some more time at hand and managed to compile 5.15.164. The issue is the same: after around 1h30m of work it hangs.
I’ll try to reproduce this on a newer supported kernel if I can.

Kernel:

Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024

The config is unchanged except for the deprecated NFSD_V2_ACL and NFSD_V3 options, which I had to remove. NFS is not in use on this server, though.

Output:

[ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
[ 4549.846507]       Not tainted 5.15.164 #1-NixOS
[ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
[ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4549.860435] Call Trace:
[ 4549.860437]  <TASK>
[ 4549.860440]  __schedule+0x373/0x1580
[ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
[ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
[ 4549.860453]  schedule+0x5b/0xe0
[ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
[ 4549.860459]  ? finish_wait+0x90/0x90
[ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
[ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
[ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
[ 4549.860490]  ? finish_wait+0x90/0x90
[ 4549.860492]  ? __blk_queue_split+0x516/0x580
[ 4549.860495]  md_handle_request+0x11f/0x1b0
[ 4549.860500]  md_submit_bio+0x6e/0xb0
[ 4549.860502]  __submit_bio+0x18c/0x220
[ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
[ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4549.860517]  process_one_work+0x1d3/0x360
[ 4549.860521]  worker_thread+0x4d/0x3b0
[ 4549.860523]  ? process_one_work+0x360/0x360
[ 4549.860525]  kthread+0x115/0x140
[ 4549.860528]  ? set_kthread_struct+0x50/0x50
[ 4549.860530]  ret_from_fork+0x1f/0x30
[ 4549.860535]  </TASK>
[ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
[ 4549.868461]       Not tainted 5.15.164 #1-NixOS
[ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
[ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4549.882368] Call Trace:
[ 4549.882369]  <TASK>
[ 4549.882370]  __schedule+0x373/0x1580
[ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
[ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 4549.882379]  schedule+0x5b/0xe0
[ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
[ 4549.882384]  ? finish_wait+0x90/0x90
[ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
[ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
[ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.882403]  ? finish_wait+0x90/0x90
[ 4549.882406]  md_handle_request+0x11f/0x1b0
[ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
[ 4549.882413]  md_submit_bio+0x6e/0xb0
[ 4549.882415]  __submit_bio+0x18c/0x220
[ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
[ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4549.882428]  process_one_work+0x1d3/0x360
[ 4549.882431]  worker_thread+0x4d/0x3b0
[ 4549.882433]  ? process_one_work+0x360/0x360
[ 4549.882435]  kthread+0x115/0x140
[ 4549.882436]  ? set_kthread_struct+0x50/0x50
[ 4549.882438]  ret_from_fork+0x1f/0x30
[ 4549.882442]  </TASK>
[ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
[ 4549.890517]       Not tainted 5.15.164 #1-NixOS
[ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
[ 4549.904411] Call Trace:
[ 4549.904412]  <TASK>
[ 4549.904414]  __schedule+0x373/0x1580
[ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
[ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
[ 4549.904498]  schedule+0x5b/0xe0
[ 4549.904500]  io_schedule+0x42/0x70
[ 4549.904503]  wait_on_page_bit_common+0x119/0x380
[ 4549.904507]  ? __page_cache_alloc+0x80/0x80
[ 4549.904510]  wait_on_page_writeback+0x22/0x70
[ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
[ 4549.904520]  evict+0x15f/0x180
[ 4549.904524]  __dentry_kill+0xde/0x170
[ 4549.904527]  dput+0x139/0x320
[ 4549.904529]  do_renameat2+0x375/0x5f0
[ 4549.904536]  __x64_sys_rename+0x3f/0x50
[ 4549.904538]  do_syscall_64+0x34/0x80
[ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
[ 4549.904544] RIP: 0033:0x7fbf3e61a75b
[ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
[ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
[ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
[ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
[ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
[ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
[ 4549.904555]  </TASK>
[ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
[ 4549.912477]       Not tainted 5.15.164 #1-NixOS
[ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
[ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4549.926380] Call Trace:
[ 4549.926381]  <TASK>
[ 4549.926383]  __schedule+0x373/0x1580
[ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
[ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 4549.926392]  schedule+0x5b/0xe0
[ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
[ 4549.926397]  ? finish_wait+0x90/0x90
[ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
[ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
[ 4549.926413]  ? finish_wait+0x90/0x90
[ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
[ 4549.926418]  md_handle_request+0x11f/0x1b0
[ 4549.926422]  md_submit_bio+0x6e/0xb0
[ 4549.926424]  __submit_bio+0x18c/0x220
[ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
[ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4549.926437]  process_one_work+0x1d3/0x360
[ 4549.926441]  worker_thread+0x4d/0x3b0
[ 4549.926442]  ? process_one_work+0x360/0x360
[ 4549.926444]  kthread+0x115/0x140
[ 4549.926447]  ? set_kthread_struct+0x50/0x50
[ 4549.926448]  ret_from_fork+0x1f/0x30
[ 4549.926454]  </TASK>
[ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
[ 4549.933603]       Not tainted 5.15.164 #1-NixOS
[ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
[ 4549.947503] Call Trace:
[ 4549.947505]  <TASK>
[ 4549.947505]  ? usleep_range_state+0x90/0x90
[ 4549.947510]  __schedule+0x373/0x1580
[ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
[ 4549.947519]  ? usleep_range_state+0x90/0x90
[ 4549.947521]  schedule+0x5b/0xe0
[ 4549.947523]  schedule_timeout+0xff/0x130
[ 4549.947526]  __wait_for_common+0xaf/0x160
[ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
[ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
[ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
[ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
[ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
[ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
[ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
[ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
[ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
[ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
[ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
[ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
[ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
[ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
[ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
[ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
[ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
[ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
[ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
[ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
[ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
[ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
[ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
[ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
[ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
[ 4549.948340]  vfs_rename+0x818/0x10d0
[ 4549.948345]  ? lookup_dcache+0x17/0x60
[ 4549.948348]  ? do_renameat2+0x57f/0x5f0
[ 4549.948350]  do_renameat2+0x57f/0x5f0
[ 4549.948355]  __x64_sys_rename+0x3f/0x50
[ 4549.948357]  do_syscall_64+0x34/0x80
[ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
[ 4549.948362] RIP: 0033:0x7fcc5520c1d7
[ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
[ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
[ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
[ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
[ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
[ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
[ 4549.948373]  </TASK>
[ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
[ 4549.956299]       Not tainted 5.15.164 #1-NixOS
[ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
[ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4549.970205] Call Trace:
[ 4549.970206]  <TASK>
[ 4549.970209]  __schedule+0x373/0x1580
[ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.970215]  schedule+0x5b/0xe0
[ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
[ 4549.970219]  ? finish_wait+0x90/0x90
[ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
[ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
[ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
[ 4549.970244]  ? finish_wait+0x90/0x90
[ 4549.970245]  ? __blk_queue_split+0x516/0x580
[ 4549.970248]  md_handle_request+0x11f/0x1b0
[ 4549.970251]  md_submit_bio+0x6e/0xb0
[ 4549.970254]  __submit_bio+0x18c/0x220
[ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
[ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4549.970267]  process_one_work+0x1d3/0x360
[ 4549.970270]  worker_thread+0x4d/0x3b0
[ 4549.970272]  ? process_one_work+0x360/0x360
[ 4549.970274]  kthread+0x115/0x140
[ 4549.970276]  ? set_kthread_struct+0x50/0x50
[ 4549.970278]  ret_from_fork+0x1f/0x30
[ 4549.970282]  </TASK>
[ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
[ 4549.978205]       Not tainted 5.15.164 #1-NixOS
[ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
[ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4549.992097] Call Trace:
[ 4549.992098]  <TASK>
[ 4549.992100]  __schedule+0x373/0x1580
[ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
[ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 4549.992109]  schedule+0x5b/0xe0
[ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
[ 4549.992114]  ? finish_wait+0x90/0x90
[ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
[ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
[ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
[ 4549.992135]  ? finish_wait+0x90/0x90
[ 4549.992137]  ? __blk_queue_split+0x516/0x580
[ 4549.992139]  md_handle_request+0x11f/0x1b0
[ 4549.992142]  md_submit_bio+0x6e/0xb0
[ 4549.992144]  __submit_bio+0x18c/0x220
[ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
[ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
[ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4549.992157]  process_one_work+0x1d3/0x360
[ 4549.992160]  worker_thread+0x4d/0x3b0
[ 4549.992162]  ? process_one_work+0x360/0x360
[ 4549.992163]  kthread+0x115/0x140
[ 4549.992166]  ? set_kthread_struct+0x50/0x50
[ 4549.992168]  ret_from_fork+0x1f/0x30
[ 4549.992172]  </TASK>
[ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
[ 4550.000095]       Not tainted 5.15.164 #1-NixOS
[ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
[ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4550.013992] Call Trace:
[ 4550.013993]  <TASK>
[ 4550.013995]  __schedule+0x373/0x1580
[ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
[ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 4550.014003]  schedule+0x5b/0xe0
[ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
[ 4550.014008]  ? finish_wait+0x90/0x90
[ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
[ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
[ 4550.014022]  ? finish_wait+0x90/0x90
[ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
[ 4550.014027]  md_handle_request+0x11f/0x1b0
[ 4550.014030]  md_submit_bio+0x6e/0xb0
[ 4550.014032]  __submit_bio+0x18c/0x220
[ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
[ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
[ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4550.014044]  process_one_work+0x1d3/0x360
[ 4550.014047]  worker_thread+0x4d/0x3b0
[ 4550.014049]  ? process_one_work+0x360/0x360
[ 4550.014050]  kthread+0x115/0x140
[ 4550.014052]  ? set_kthread_struct+0x50/0x50
[ 4550.014054]  ret_from_fork+0x1f/0x30
[ 4550.014058]  </TASK>
[ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
[ 4550.021982]       Not tainted 5.15.164 #1-NixOS
[ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
[ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4550.035887] Call Trace:
[ 4550.035888]  <TASK>
[ 4550.035890]  __schedule+0x373/0x1580
[ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
[ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 4550.035899]  schedule+0x5b/0xe0
[ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
[ 4550.035904]  ? finish_wait+0x90/0x90
[ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
[ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
[ 4550.035919]  ? finish_wait+0x90/0x90
[ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
[ 4550.035924]  md_handle_request+0x11f/0x1b0
[ 4550.035927]  md_submit_bio+0x6e/0xb0
[ 4550.035929]  __submit_bio+0x18c/0x220
[ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
[ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
[ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4550.035942]  process_one_work+0x1d3/0x360
[ 4550.035946]  worker_thread+0x4d/0x3b0
[ 4550.035948]  ? process_one_work+0x360/0x360
[ 4550.035949]  kthread+0x115/0x140
[ 4550.035951]  ? set_kthread_struct+0x50/0x50
[ 4550.035953]  ret_from_fork+0x1f/0x30
[ 4550.035957]  </TASK>
[ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
[ 4550.043881]       Not tainted 5.15.164 #1-NixOS
[ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
[ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
[ 4550.057794] Call Trace:
[ 4550.057796]  <TASK>
[ 4550.057798]  __schedule+0x373/0x1580
[ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
[ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 4550.057806]  schedule+0x5b/0xe0
[ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
[ 4550.057810]  ? finish_wait+0x90/0x90
[ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
[ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
[ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
[ 4550.057824]  ? finish_wait+0x90/0x90
[ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
[ 4550.057828]  md_handle_request+0x11f/0x1b0
[ 4550.057831]  md_submit_bio+0x6e/0xb0
[ 4550.057834]  __submit_bio+0x18c/0x220
[ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
[ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
[ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
[ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
[ 4550.057846]  process_one_work+0x1d3/0x360
[ 4550.057848]  worker_thread+0x4d/0x3b0
[ 4550.057850]  ? process_one_work+0x360/0x360
[ 4550.057852]  kthread+0x115/0x140
[ 4550.057854]  ? set_kthread_struct+0x50/0x50
[ 4550.057856]  ret_from_fork+0x1f/0x30
[ 4550.057860]  </TASK>
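The traces above are near-identical; a throwaway script (a sketch written against the hung-task format shown here, not an existing tool) can tally where each blocked task is actually sleeping, which makes it easy to confirm that every kcryptd worker piles up in md_bitmap_startwrite:

```python
import re
from collections import Counter

# Frame lines look like "[ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0";
# a leading "? " marks a stale/speculative frame, which we ignore.
TASK = re.compile(r"INFO: task (\S+):\d+ blocked")
FRAME = re.compile(r"\]\s+(\?\s+)?([\w.]+)\+0x")
# Scheduler entry points: the frame *after* these is the real wait site.
SCHED = {"__schedule", "schedule", "io_schedule", "schedule_timeout"}

def summarize_hung_tasks(log: str) -> Counter:
    """Count (task name, wait site) pairs in hung-task dmesg output."""
    counts: Counter = Counter()
    task = None           # task whose trace we are currently inside
    after_sched = False   # True once we have passed a scheduler frame
    for line in log.splitlines():
        m = TASK.search(line)
        if m:
            task, after_sched = m.group(1), False
            continue
        m = FRAME.search(line)
        if not m or task is None or m.group(1):  # no frame, or a "? " frame
            continue
        if m.group(2) in SCHED:
            after_sched = True
        elif after_sched:
            counts[(task, m.group(2))] += 1
            task, after_sched = None, False  # done with this trace
    return counts
```

Run over the dmesg above, this reports md_bitmap_startwrite for all of the kcryptd workers, with rsync and the backup process blocked in page/buffer waits further up the stack.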


> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
> 
> I tried updating to 5.15.164, but had to fight our config management, as some options have shifted and needed to be filtered out: NFSD_V3 and NFSD_V2_ACL are now fixed and cause config errors if set. I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
> 
>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Sure,
>> 
>> would you prefer me testing on 5.15.x or something else?
>> 
>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>> 
>>> Hi,
>>> 
>>> On 2024/08/06 22:10, Christian Theune wrote:
>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVMe-based RAID-6 array that is encrypted using LUKS.
>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 GiB in total) to migrate data, and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this, as the machine will hang trying to unmount the affected filesystem. No messages other than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>> Kernel:
>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>> 
>>> Since you can trigger this easily, I'd suggest you try the latest
>>> kernel release first.
>>> 
>>> Thanks,
>>> Kuai
>>> 
>>>> See the kernel config attached.
>> 
>> 
>> Kind regards,
>> Christian Theune
>> 
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> 
>> 
> 
> Kind regards,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-07  8:59       ` Christian Theune
@ 2024-08-07 21:05         ` John Stoffel
  2024-08-08  1:33           ` Yu Kuai
  2024-08-08  6:02           ` Christian Theune
  0 siblings, 2 replies; 88+ messages in thread
From: John Stoffel @ 2024-08-07 21:05 UTC (permalink / raw)
  To: Christian Theune
  Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:



> I had some more time at hand and managed to compile 5.15.164. The
> issue is the same: after around 1h30m of work it hangs.  I’ll try to
> reproduce this on a newer supported kernel if I can.

Supported by whom?  NixOS?  Why don't you just install Linux kernel
6.6.x and see if the problem is still there?  5.15.x is ancient and
unsupported upstream now.
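
For reference, on NixOS moving the machine to the 6.6 LTS series is normally a one-line configuration change (the attribute name below assumes the current nixpkgs channel packages 6.6):

```nix
# configuration.nix: pin the 6.6 LTS kernel series instead of the default
boot.kernelPackages = pkgs.linuxPackages_6_6;
```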



> Kernel:

> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024

> The config is unchanged except from the deprecated NFSD_V2_ACL and NFSD_V3 options which I had to remove. NFS is not in use on this server, though.

> Output:

> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4549.860435] Call Trace:
> [ 4549.860437]  <TASK>
> [ 4549.860440]  __schedule+0x373/0x1580
> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
> [ 4549.860453]  schedule+0x5b/0xe0
> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
> [ 4549.860459]  ? finish_wait+0x90/0x90
> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
> [ 4549.860490]  ? finish_wait+0x90/0x90
> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
> [ 4549.860495]  md_handle_request+0x11f/0x1b0
> [ 4549.860500]  md_submit_bio+0x6e/0xb0
> [ 4549.860502]  __submit_bio+0x18c/0x220
> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4549.860517]  process_one_work+0x1d3/0x360
> [ 4549.860521]  worker_thread+0x4d/0x3b0
> [ 4549.860523]  ? process_one_work+0x360/0x360
> [ 4549.860525]  kthread+0x115/0x140
> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
> [ 4549.860530]  ret_from_fork+0x1f/0x30
> [ 4549.860535]  </TASK>
> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4549.882368] Call Trace:
> [ 4549.882369]  <TASK>
> [ 4549.882370]  __schedule+0x373/0x1580
> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> [ 4549.882379]  schedule+0x5b/0xe0
> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
> [ 4549.882384]  ? finish_wait+0x90/0x90
> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.882403]  ? finish_wait+0x90/0x90
> [ 4549.882406]  md_handle_request+0x11f/0x1b0
> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
> [ 4549.882413]  md_submit_bio+0x6e/0xb0
> [ 4549.882415]  __submit_bio+0x18c/0x220
> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4549.882428]  process_one_work+0x1d3/0x360
> [ 4549.882431]  worker_thread+0x4d/0x3b0
> [ 4549.882433]  ? process_one_work+0x360/0x360
> [ 4549.882435]  kthread+0x115/0x140
> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
> [ 4549.882438]  ret_from_fork+0x1f/0x30
> [ 4549.882442]  </TASK>
> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
> [ 4549.904411] Call Trace:
> [ 4549.904412]  <TASK>
> [ 4549.904414]  __schedule+0x373/0x1580
> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
> [ 4549.904498]  schedule+0x5b/0xe0
> [ 4549.904500]  io_schedule+0x42/0x70
> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
> [ 4549.904520]  evict+0x15f/0x180
> [ 4549.904524]  __dentry_kill+0xde/0x170
> [ 4549.904527]  dput+0x139/0x320
> [ 4549.904529]  do_renameat2+0x375/0x5f0
> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
> [ 4549.904538]  do_syscall_64+0x34/0x80
> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
> [ 4549.904555]  </TASK>
> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4549.926380] Call Trace:
> [ 4549.926381]  <TASK>
> [ 4549.926383]  __schedule+0x373/0x1580
> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> [ 4549.926392]  schedule+0x5b/0xe0
> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
> [ 4549.926397]  ? finish_wait+0x90/0x90
> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
> [ 4549.926413]  ? finish_wait+0x90/0x90
> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
> [ 4549.926418]  md_handle_request+0x11f/0x1b0
> [ 4549.926422]  md_submit_bio+0x6e/0xb0
> [ 4549.926424]  __submit_bio+0x18c/0x220
> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4549.926437]  process_one_work+0x1d3/0x360
> [ 4549.926441]  worker_thread+0x4d/0x3b0
> [ 4549.926442]  ? process_one_work+0x360/0x360
> [ 4549.926444]  kthread+0x115/0x140
> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
> [ 4549.926448]  ret_from_fork+0x1f/0x30
> [ 4549.926454]  </TASK>
> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
> [ 4549.947503] Call Trace:
> [ 4549.947505]  <TASK>
> [ 4549.947505]  ? usleep_range_state+0x90/0x90
> [ 4549.947510]  __schedule+0x373/0x1580
> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
> [ 4549.947519]  ? usleep_range_state+0x90/0x90
> [ 4549.947521]  schedule+0x5b/0xe0
> [ 4549.947523]  schedule_timeout+0xff/0x130
> [ 4549.947526]  __wait_for_common+0xaf/0x160
> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
> [ 4549.948340]  vfs_rename+0x818/0x10d0
> [ 4549.948345]  ? lookup_dcache+0x17/0x60
> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
> [ 4549.948350]  do_renameat2+0x57f/0x5f0
> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
> [ 4549.948357]  do_syscall_64+0x34/0x80
> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
> [ 4549.948373]  </TASK>
> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4549.970205] Call Trace:
> [ 4549.970206]  <TASK>
> [ 4549.970209]  __schedule+0x373/0x1580
> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.970215]  schedule+0x5b/0xe0
> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
> [ 4549.970219]  ? finish_wait+0x90/0x90
> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
> [ 4549.970244]  ? finish_wait+0x90/0x90
> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
> [ 4549.970248]  md_handle_request+0x11f/0x1b0
> [ 4549.970251]  md_submit_bio+0x6e/0xb0
> [ 4549.970254]  __submit_bio+0x18c/0x220
> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4549.970267]  process_one_work+0x1d3/0x360
> [ 4549.970270]  worker_thread+0x4d/0x3b0
> [ 4549.970272]  ? process_one_work+0x360/0x360
> [ 4549.970274]  kthread+0x115/0x140
> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
> [ 4549.970278]  ret_from_fork+0x1f/0x30
> [ 4549.970282]  </TASK>
> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4549.992097] Call Trace:
> [ 4549.992098]  <TASK>
> [ 4549.992100]  __schedule+0x373/0x1580
> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> [ 4549.992109]  schedule+0x5b/0xe0
> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
> [ 4549.992114]  ? finish_wait+0x90/0x90
> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
> [ 4549.992135]  ? finish_wait+0x90/0x90
> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
> [ 4549.992139]  md_handle_request+0x11f/0x1b0
> [ 4549.992142]  md_submit_bio+0x6e/0xb0
> [ 4549.992144]  __submit_bio+0x18c/0x220
> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4549.992157]  process_one_work+0x1d3/0x360
> [ 4549.992160]  worker_thread+0x4d/0x3b0
> [ 4549.992162]  ? process_one_work+0x360/0x360
> [ 4549.992163]  kthread+0x115/0x140
> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
> [ 4549.992168]  ret_from_fork+0x1f/0x30
> [ 4549.992172]  </TASK>
> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4550.013992] Call Trace:
> [ 4550.013993]  <TASK>
> [ 4550.013995]  __schedule+0x373/0x1580
> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> [ 4550.014003]  schedule+0x5b/0xe0
> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
> [ 4550.014008]  ? finish_wait+0x90/0x90
> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
> [ 4550.014022]  ? finish_wait+0x90/0x90
> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
> [ 4550.014027]  md_handle_request+0x11f/0x1b0
> [ 4550.014030]  md_submit_bio+0x6e/0xb0
> [ 4550.014032]  __submit_bio+0x18c/0x220
> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4550.014044]  process_one_work+0x1d3/0x360
> [ 4550.014047]  worker_thread+0x4d/0x3b0
> [ 4550.014049]  ? process_one_work+0x360/0x360
> [ 4550.014050]  kthread+0x115/0x140
> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
> [ 4550.014054]  ret_from_fork+0x1f/0x30
> [ 4550.014058]  </TASK>
> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4550.035887] Call Trace:
> [ 4550.035888]  <TASK>
> [ 4550.035890]  __schedule+0x373/0x1580
> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> [ 4550.035899]  schedule+0x5b/0xe0
> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
> [ 4550.035904]  ? finish_wait+0x90/0x90
> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
> [ 4550.035919]  ? finish_wait+0x90/0x90
> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
> [ 4550.035924]  md_handle_request+0x11f/0x1b0
> [ 4550.035927]  md_submit_bio+0x6e/0xb0
> [ 4550.035929]  __submit_bio+0x18c/0x220
> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4550.035942]  process_one_work+0x1d3/0x360
> [ 4550.035946]  worker_thread+0x4d/0x3b0
> [ 4550.035948]  ? process_one_work+0x360/0x360
> [ 4550.035949]  kthread+0x115/0x140
> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
> [ 4550.035953]  ret_from_fork+0x1f/0x30
> [ 4550.035957]  </TASK>
> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
> [ 4550.057794] Call Trace:
> [ 4550.057796]  <TASK>
> [ 4550.057798]  __schedule+0x373/0x1580
> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> [ 4550.057806]  schedule+0x5b/0xe0
> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
> [ 4550.057810]  ? finish_wait+0x90/0x90
> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
> [ 4550.057824]  ? finish_wait+0x90/0x90
> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
> [ 4550.057828]  md_handle_request+0x11f/0x1b0
> [ 4550.057831]  md_submit_bio+0x6e/0xb0
> [ 4550.057834]  __submit_bio+0x18c/0x220
> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
> [ 4550.057846]  process_one_work+0x1d3/0x360
> [ 4550.057848]  worker_thread+0x4d/0x3b0
> [ 4550.057850]  ? process_one_work+0x360/0x360
> [ 4550.057852]  kthread+0x115/0x140
> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
> [ 4550.057856]  ret_from_fork+0x1f/0x30
> [ 4550.057860]  </TASK>


>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> I tried updating to 5.15.164, but had to struggle against our config management, as some options have shifted and need to be filtered out: NFSD_V3 and NFSD_V2_ACL are now fixed and cause config errors if set - I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
>> 
>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Sure,
>>> 
>>> would you prefer that I test on 5.15.x or something else?
>>> 
>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> On 2024/08/06 22:10, Christian Theune wrote:
>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies data off virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVMe-based RAID-6 array that is encrypted using LUKS.
>>>>> Today I started a larger rsync job from another server (a couple of million files, around 200-300 GiB in total) to migrate data, and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) hangs indefinitely. A reset is required to get out of this, as the machine hangs trying to unmount the affected filesystem. No messages other than the hung tasks are presented - I have no indicator of hardware faults at the moment.
>>>>> I’m messaging both dm-devel and linux-raid as I suspect either one or both (or an interaction between them) might be the cause.
>>>>> Kernel:
>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>> 
>>>> Since you can trigger this easily, I suggest you try the latest
>>>> kernel release first.
>>>> 
>>>> Thanks,
>>>> Kuai
>>>> 
>>>>> See the kernel config attached.
>>> 
>>> 
>>> Kind regards,
>>> Christian Theune
>>> 
>>> -- 
>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>> 
>>> 
>> 




^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-07 21:05         ` John Stoffel
@ 2024-08-08  1:33           ` Yu Kuai
  2024-08-08  6:02           ` Christian Theune
  1 sibling, 0 replies; 88+ messages in thread
From: Yu Kuai @ 2024-08-08  1:33 UTC (permalink / raw)
  To: John Stoffel, Christian Theune
  Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

On 2024/08/08 5:05, John Stoffel wrote:
>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
> 
> 
> 
>> I had some more time at hand and managed to compile 5.15.164. The
>> issue is the same: after around 1h30m of work it hangs. I’ll try to
>> reproduce this on a newer supported kernel if I can.
> 
> Supported by whom?  NixOS?  Why don't you just install Linux kernel
> 6.6.x and see if the problem is still there?  5.15.x is ancient and
> unsupported upstream now.

I agree that testing on 5.15.x is not necessary for now. Please test on
v6.10 directly and make sure this is not a known problem first. The
hung-task stacks are not really helpful on their own, because many
different problems look like this. :(
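
If it helps: when the hang recurs, dumping the stacks of *all* D-state
tasks via sysrq usually gives more context than the periodic hung-task
samples. A minimal sketch (standard kernel triage, run as root; not
specific to this report - adjust as needed):

```shell
#!/bin/sh
# Dump blocked-task stacks to the kernel log when the hang reproduces.
# Requires root; prints a note and does nothing otherwise.
if [ -w /proc/sysrq-trigger ]; then
    echo 1 > /proc/sys/kernel/sysrq      # enable all sysrq functions
    echo w > /proc/sysrq-trigger         # 'w': dump uninterruptible (D) tasks
    dmesg | tail -n 300                  # read the stacks back from the ring buffer
else
    echo "need root for sysrq" >&2
fi
```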

Thanks,
Kuai

> 
> 
> 
>> Kernel:
> 
>> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024
> 
>> The config is unchanged except for the deprecated NFSD_V2_ACL and NFSD_V3 options, which I had to remove. NFS is not in use on this server, though.
> 
>> Output:
> 
>> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
>> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
>> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.860435] Call Trace:
>> [ 4549.860437]  <TASK>
>> [ 4549.860440]  __schedule+0x373/0x1580
>> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
>> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
>> [ 4549.860453]  schedule+0x5b/0xe0
>> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.860459]  ? finish_wait+0x90/0x90
>> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
>> [ 4549.860490]  ? finish_wait+0x90/0x90
>> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
>> [ 4549.860495]  md_handle_request+0x11f/0x1b0
>> [ 4549.860500]  md_submit_bio+0x6e/0xb0
>> [ 4549.860502]  __submit_bio+0x18c/0x220
>> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.860517]  process_one_work+0x1d3/0x360
>> [ 4549.860521]  worker_thread+0x4d/0x3b0
>> [ 4549.860523]  ? process_one_work+0x360/0x360
>> [ 4549.860525]  kthread+0x115/0x140
>> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
>> [ 4549.860530]  ret_from_fork+0x1f/0x30
>> [ 4549.860535]  </TASK>
>> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
>> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
>> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.882368] Call Trace:
>> [ 4549.882369]  <TASK>
>> [ 4549.882370]  __schedule+0x373/0x1580
>> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4549.882379]  schedule+0x5b/0xe0
>> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.882384]  ? finish_wait+0x90/0x90
>> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.882403]  ? finish_wait+0x90/0x90
>> [ 4549.882406]  md_handle_request+0x11f/0x1b0
>> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
>> [ 4549.882413]  md_submit_bio+0x6e/0xb0
>> [ 4549.882415]  __submit_bio+0x18c/0x220
>> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.882428]  process_one_work+0x1d3/0x360
>> [ 4549.882431]  worker_thread+0x4d/0x3b0
>> [ 4549.882433]  ? process_one_work+0x360/0x360
>> [ 4549.882435]  kthread+0x115/0x140
>> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
>> [ 4549.882438]  ret_from_fork+0x1f/0x30
>> [ 4549.882442]  </TASK>
>> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
>> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
>> [ 4549.904411] Call Trace:
>> [ 4549.904412]  <TASK>
>> [ 4549.904414]  __schedule+0x373/0x1580
>> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
>> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
>> [ 4549.904498]  schedule+0x5b/0xe0
>> [ 4549.904500]  io_schedule+0x42/0x70
>> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
>> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
>> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
>> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
>> [ 4549.904520]  evict+0x15f/0x180
>> [ 4549.904524]  __dentry_kill+0xde/0x170
>> [ 4549.904527]  dput+0x139/0x320
>> [ 4549.904529]  do_renameat2+0x375/0x5f0
>> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
>> [ 4549.904538]  do_syscall_64+0x34/0x80
>> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
>> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
>> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
>> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
>> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
>> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
>> [ 4549.904555]  </TASK>
>> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
>> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
>> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.926380] Call Trace:
>> [ 4549.926381]  <TASK>
>> [ 4549.926383]  __schedule+0x373/0x1580
>> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4549.926392]  schedule+0x5b/0xe0
>> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.926397]  ? finish_wait+0x90/0x90
>> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4549.926413]  ? finish_wait+0x90/0x90
>> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
>> [ 4549.926418]  md_handle_request+0x11f/0x1b0
>> [ 4549.926422]  md_submit_bio+0x6e/0xb0
>> [ 4549.926424]  __submit_bio+0x18c/0x220
>> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.926437]  process_one_work+0x1d3/0x360
>> [ 4549.926441]  worker_thread+0x4d/0x3b0
>> [ 4549.926442]  ? process_one_work+0x360/0x360
>> [ 4549.926444]  kthread+0x115/0x140
>> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
>> [ 4549.926448]  ret_from_fork+0x1f/0x30
>> [ 4549.926454]  </TASK>
>> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
>> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
>> [ 4549.947503] Call Trace:
>> [ 4549.947505]  <TASK>
>> [ 4549.947505]  ? usleep_range_state+0x90/0x90
>> [ 4549.947510]  __schedule+0x373/0x1580
>> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
>> [ 4549.947519]  ? usleep_range_state+0x90/0x90
>> [ 4549.947521]  schedule+0x5b/0xe0
>> [ 4549.947523]  schedule_timeout+0xff/0x130
>> [ 4549.947526]  __wait_for_common+0xaf/0x160
>> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
>> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
>> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
>> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
>> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
>> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
>> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
>> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
>> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
>> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
>> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
>> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
>> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
>> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
>> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
>> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
>> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
>> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
>> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
>> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
>> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
>> [ 4549.948340]  vfs_rename+0x818/0x10d0
>> [ 4549.948345]  ? lookup_dcache+0x17/0x60
>> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
>> [ 4549.948350]  do_renameat2+0x57f/0x5f0
>> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
>> [ 4549.948357]  do_syscall_64+0x34/0x80
>> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
>> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
>> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
>> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
>> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
>> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
>> [ 4549.948373]  </TASK>
>> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
>> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
>> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.970205] Call Trace:
>> [ 4549.970206]  <TASK>
>> [ 4549.970209]  __schedule+0x373/0x1580
>> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.970215]  schedule+0x5b/0xe0
>> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.970219]  ? finish_wait+0x90/0x90
>> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
>> [ 4549.970244]  ? finish_wait+0x90/0x90
>> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
>> [ 4549.970248]  md_handle_request+0x11f/0x1b0
>> [ 4549.970251]  md_submit_bio+0x6e/0xb0
>> [ 4549.970254]  __submit_bio+0x18c/0x220
>> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.970267]  process_one_work+0x1d3/0x360
>> [ 4549.970270]  worker_thread+0x4d/0x3b0
>> [ 4549.970272]  ? process_one_work+0x360/0x360
>> [ 4549.970274]  kthread+0x115/0x140
>> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
>> [ 4549.970278]  ret_from_fork+0x1f/0x30
>> [ 4549.970282]  </TASK>
>> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
>> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
>> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.992097] Call Trace:
>> [ 4549.992098]  <TASK>
>> [ 4549.992100]  __schedule+0x373/0x1580
>> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4549.992109]  schedule+0x5b/0xe0
>> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.992114]  ? finish_wait+0x90/0x90
>> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
>> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
>> [ 4549.992135]  ? finish_wait+0x90/0x90
>> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
>> [ 4549.992139]  md_handle_request+0x11f/0x1b0
>> [ 4549.992142]  md_submit_bio+0x6e/0xb0
>> [ 4549.992144]  __submit_bio+0x18c/0x220
>> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.992157]  process_one_work+0x1d3/0x360
>> [ 4549.992160]  worker_thread+0x4d/0x3b0
>> [ 4549.992162]  ? process_one_work+0x360/0x360
>> [ 4549.992163]  kthread+0x115/0x140
>> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
>> [ 4549.992168]  ret_from_fork+0x1f/0x30
>> [ 4549.992172]  </TASK>
>> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
>> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
>> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
>> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4550.013992] Call Trace:
>> [ 4550.013993]  <TASK>
>> [ 4550.013995]  __schedule+0x373/0x1580
>> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4550.014003]  schedule+0x5b/0xe0
>> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4550.014008]  ? finish_wait+0x90/0x90
>> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4550.014022]  ? finish_wait+0x90/0x90
>> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
>> [ 4550.014027]  md_handle_request+0x11f/0x1b0
>> [ 4550.014030]  md_submit_bio+0x6e/0xb0
>> [ 4550.014032]  __submit_bio+0x18c/0x220
>> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
>> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4550.014044]  process_one_work+0x1d3/0x360
>> [ 4550.014047]  worker_thread+0x4d/0x3b0
>> [ 4550.014049]  ? process_one_work+0x360/0x360
>> [ 4550.014050]  kthread+0x115/0x140
>> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
>> [ 4550.014054]  ret_from_fork+0x1f/0x30
>> [ 4550.014058]  </TASK>
>> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
>> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
>> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
>> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4550.035887] Call Trace:
>> [ 4550.035888]  <TASK>
>> [ 4550.035890]  __schedule+0x373/0x1580
>> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4550.035899]  schedule+0x5b/0xe0
>> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4550.035904]  ? finish_wait+0x90/0x90
>> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4550.035919]  ? finish_wait+0x90/0x90
>> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
>> [ 4550.035924]  md_handle_request+0x11f/0x1b0
>> [ 4550.035927]  md_submit_bio+0x6e/0xb0
>> [ 4550.035929]  __submit_bio+0x18c/0x220
>> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
>> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4550.035942]  process_one_work+0x1d3/0x360
>> [ 4550.035946]  worker_thread+0x4d/0x3b0
>> [ 4550.035948]  ? process_one_work+0x360/0x360
>> [ 4550.035949]  kthread+0x115/0x140
>> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
>> [ 4550.035953]  ret_from_fork+0x1f/0x30
>> [ 4550.035957]  </TASK>
>> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
>> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
>> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
>> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4550.057794] Call Trace:
>> [ 4550.057796]  <TASK>
>> [ 4550.057798]  __schedule+0x373/0x1580
>> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4550.057806]  schedule+0x5b/0xe0
>> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4550.057810]  ? finish_wait+0x90/0x90
>> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4550.057824]  ? finish_wait+0x90/0x90
>> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
>> [ 4550.057828]  md_handle_request+0x11f/0x1b0
>> [ 4550.057831]  md_submit_bio+0x6e/0xb0
>> [ 4550.057834]  __submit_bio+0x18c/0x220
>> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
>> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4550.057846]  process_one_work+0x1d3/0x360
>> [ 4550.057848]  worker_thread+0x4d/0x3b0
>> [ 4550.057850]  ? process_one_work+0x360/0x360
>> [ 4550.057852]  kthread+0x115/0x140
>> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
>> [ 4550.057856]  ret_from_fork+0x1f/0x30
>> [ 4550.057860]  </TASK>
> 
> 
>>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>>>
>>> I tried updating to 5.15.164, but had to fight our config management a bit, as some options have shifted and need to be filtered out: NFSD_V3 and NFSD_V2_ACL are now fixed and cause config errors if set - I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
>>>
>>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>>>
>>>> Sure,
>>>>
>>>> would you prefer me testing on 5.15.x or something else?
>>>>
>>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> 在 2024/08/06 22:10, Christian Theune 写道:
>>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array that is encrypted using LUKS.
>>>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 gib in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>>>> Kernel:
>>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>>>
>>>>> Since you can trigger this easily, I'll suggest you to try the latest
>>>>> kernel release first.
>>>>>
>>>>> Thanks,
>>>>> Kuai
>>>>>
>>>>>> See the kernel config attached.
>>>>
>>>>
>>>> Kind regards,
>>>> Christian Theune
>>>>
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>
>>>>
>>>
> 
> 
> 
> .
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-07 21:05         ` John Stoffel
  2024-08-08  1:33           ` Yu Kuai
@ 2024-08-08  6:02           ` Christian Theune
  2024-08-08  6:55             ` Yu Kuai
  2024-08-08 14:23             ` John Stoffel
  1 sibling, 2 replies; 88+ messages in thread
From: Christian Theune @ 2024-08-08  6:02 UTC (permalink / raw)
  To: John Stoffel; +Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 7. Aug 2024, at 23:05, John Stoffel <john@stoffel.org> wrote:
> 
>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
> 
> 
> 
>> I had some more time at hand and managed to compile 5.15.164. The
>> issue is the same: after around 1h30m of work it hangs. I’ll try to
>> reproduce this on a newer supported kernel if I can.
> 
> Supported by whom?  NixOS?  Why don't you just install Linux kernel
> 6.6.x and see if the problem is still there?  5.15.x is ancient and
> unsupported upstream now.

I did just that. However, calling 5.15 “unsupported” by upstream confuses me: it’s an official LTS kernel with an EOL of December 2026.

Also, I’d like to note that NixOS kernels tend to be very close to upstream. The only patches I can see involved here are those that replace some hard-coded references to user-space paths:

https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/linux-kernels.nix#L173
https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/request-key-helper.patch
https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/bridge-stp-helper.patch

Kernel is now:

Linux barbrady08 6.10.3 #1-NixOS SMP PREEMPT_DYNAMIC Sat Aug  3 07:01:09 UTC 2024 x86_64 GNU/Linux

The issue is still there on 6.10.3 and now looks like shown below.

I’m aware that this output shows symptoms and not (necessarily) the cause. I’m currently a bit out of ideas where to look for more information and would appreciate any pointers. My suspicion is an interaction problem triggered by the use of NVMe in combination with the other layers involved (xfs, dm-crypt and raid are the ones I’m aware of playing a role).

The use of NVMe itself likely isn’t the issue (we’ve been using NVMe on similar hosts, also in combination with dm-crypt, with this kernel for a while now), but I could imagine its higher performance triggering a race condition - although the specific performance parameters aren’t *that* high: right before the lockup I see ~700 IOPS reading and ~2.5k IOPS writing. In short, we have run NVMe with dm-crypt before, but never together with md raid.
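To narrow down whether the interaction with dm-crypt matters at all, I’m considering a throwaway RAID-6 built from loop devices, so the same rsync write pattern hits md directly. A rough sketch - all device and file names here (/dev/md99, /dev/loop1x, /tmp/raid*.img) are made up for illustration and not from our production setup; it needs root and mdadm, so it only runs when explicitly requested:

```shell
# Sketch only: exercise the md write-intent-bitmap path without dm-crypt in
# the stack. Example names throughout; guarded behind an environment variable
# because it needs root and creates devices.
if [ "${RUN_MD_REPRO:-0}" = 1 ]; then
    for i in 0 1 2 3; do
        truncate -s 1G "/tmp/raid$i.img"      # sparse backing file per member
        losetup "/dev/loop1$i" "/tmp/raid$i.img"
    done
    # RAID-6 with an internal write-intent bitmap, like the production array
    mdadm --create /dev/md99 --level=6 --raid-devices=4 --bitmap=internal \
          /dev/loop10 /dev/loop11 /dev/loop12 /dev/loop13
    mkfs.xfs /dev/md99
    mount /dev/md99 /mnt
    # ...then rerun the rsync workload against /mnt and watch for hung tasks
else
    echo "set RUN_MD_REPRO=1 and run as root to execute"
fi
```

If that still hangs in md_bitmap_startwrite, dm-crypt is probably just the messenger delivering the writes.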

I can perform debugging on that machine as needed, but googling for any combination of hung tasks related to nvme/xfs/crypt/raid only turns up generic performance concerns from forums, an unrelated xfs issue mentioned by Red Hat, and the list archive entry for this very post.
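For the record, this is roughly the state I plan to capture the next time it locks up, before resetting the machine - md127 is a placeholder for the actual array name, and each read degrades gracefully if the attribute doesn’t exist:

```shell
# Sketch: snapshot md/raid456 state around the hang. md127 is an example
# name; substitute the real array.
for f in /proc/mdstat \
         /sys/block/md127/md/stripe_cache_size \
         /sys/block/md127/md/stripe_cache_active \
         /sys/block/md127/md/sync_action; do
    printf '== %s ==\n' "$f"
    cat "$f" 2>/dev/null || echo "(not available)"
done
# As root, 'echo w > /proc/sysrq-trigger' dumps *all* D-state tasks to dmesg,
# not just the handful the hung-task detector happens to sample.
```

If there’s anything else worth grabbing while the machine is wedged, I’m happy to add it.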

[ 7497.019235] INFO: task .backy-wrapped:2706 blocked for more than 122 seconds.
[ 7497.027265]       Not tainted 6.10.3 #1-NixOS
[ 7497.032173] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.040974] task:.backy-wrapped  state:D stack:0     pid:2706  tgid:2706  ppid:1      flags:0x00000002
[ 7497.040979] Call Trace:
[ 7497.040981]  <TASK>
[ 7497.040987]  __schedule+0x3fa/0x1550
[ 7497.040996]  ? xfs_iextents_copy+0xec/0x1b0 [xfs]
[ 7497.041085]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.041089]  ? xlog_copy_iovec+0x30/0x90 [xfs]
[ 7497.041168]  schedule+0x27/0xf0
[ 7497.041171]  io_schedule+0x46/0x70
[ 7497.041173]  folio_wait_bit_common+0x13f/0x340
[ 7497.041180]  ? __pfx_wake_page_function+0x10/0x10
[ 7497.041187]  folio_wait_writeback+0x2b/0x80
[ 7497.041191]  truncate_inode_partial_folio+0x5b/0x190
[ 7497.041194]  truncate_inode_pages_range+0x1de/0x400
[ 7497.041207]  evict+0x1b0/0x1d0
[ 7497.041212]  __dentry_kill+0x6e/0x170
[ 7497.041216]  dput+0xe5/0x1b0
[ 7497.041218]  do_renameat2+0x386/0x600
[ 7497.041226]  __x64_sys_rename+0x43/0x50
[ 7497.041229]  do_syscall_64+0xb7/0x200
[ 7497.041234]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 7497.041236] RIP: 0033:0x7f4be586f75b
[ 7497.041265] RSP: 002b:00007fffd2706538 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
[ 7497.041267] RAX: ffffffffffffffda RBX: 00007fffd27065d0 RCX: 00007f4be586f75b
[ 7497.041269] RDX: 0000000000000000 RSI: 00007f4bd6f73e50 RDI: 00007f4bd6f732d0
[ 7497.041270] RBP: 00007fffd2706580 R08: 00000000ffffffff R09: 0000000000000000
[ 7497.041271] R10: 00007fffd27067b0 R11: 0000000000000246 R12: 00000000ffffff9c
[ 7497.041273] R13: 00000000ffffff9c R14: 0000000037fb4ab0 R15: 00007f4be5814810
[ 7497.041277]  </TASK>
[ 7497.041281] INFO: task kworker/u131:1:12780 blocked for more than 122 seconds.
[ 7497.049410]       Not tainted 6.10.3 #1-NixOS
[ 7497.054317] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.063124] task:kworker/u131:1  state:D stack:0     pid:12780 tgid:12780 ppid:2      flags:0x00004000
[ 7497.063131] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[ 7497.063140] Call Trace:
[ 7497.063141]  <TASK>
[ 7497.063145]  __schedule+0x3fa/0x1550
[ 7497.063154]  schedule+0x27/0xf0
[ 7497.063156]  md_bitmap_startwrite+0x14f/0x1c0
[ 7497.063160]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 7497.063168]  __add_stripe_bio+0x1f4/0x240 [raid456]
[ 7497.063175]  raid5_make_request+0x34d/0x1280 [raid456]
[ 7497.063182]  ? __pfx_woken_wake_function+0x10/0x10
[ 7497.063184]  ? bio_split_rw+0x193/0x260
[ 7497.063190]  md_handle_request+0x153/0x270
[ 7497.063194]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.063198]  __submit_bio+0x190/0x240
[ 7497.063203]  submit_bio_noacct_nocheck+0x19a/0x3c0
[ 7497.063205]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.063207]  ? submit_bio_noacct+0x46/0x5a0
[ 7497.063210]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[ 7497.063214]  process_one_work+0x18f/0x3b0
[ 7497.063219]  worker_thread+0x233/0x340
[ 7497.063222]  ? __pfx_worker_thread+0x10/0x10
[ 7497.063225]  kthread+0xcd/0x100
[ 7497.063228]  ? __pfx_kthread+0x10/0x10
[ 7497.063230]  ret_from_fork+0x31/0x50
[ 7497.063234]  ? __pfx_kthread+0x10/0x10
[ 7497.063236]  ret_from_fork_asm+0x1a/0x30
[ 7497.063243]  </TASK>
[ 7497.063246] INFO: task kworker/u131:0:17487 blocked for more than 122 seconds.
[ 7497.071367]       Not tainted 6.10.3 #1-NixOS
[ 7497.076269] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.085073] task:kworker/u131:0  state:D stack:0     pid:17487 tgid:17487 ppid:2      flags:0x00004000
[ 7497.085081] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[ 7497.085086] Call Trace:
[ 7497.085087]  <TASK>
[ 7497.085089]  __schedule+0x3fa/0x1550
[ 7497.085094]  schedule+0x27/0xf0
[ 7497.085096]  md_bitmap_startwrite+0x14f/0x1c0
[ 7497.085098]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 7497.085102]  __add_stripe_bio+0x1f4/0x240 [raid456]
[ 7497.085108]  raid5_make_request+0x34d/0x1280 [raid456]
[ 7497.085114]  ? __pfx_woken_wake_function+0x10/0x10
[ 7497.085116]  ? bio_split_rw+0x193/0x260
[ 7497.085120]  md_handle_request+0x153/0x270
[ 7497.085122]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.085125]  __submit_bio+0x190/0x240
[ 7497.085128]  submit_bio_noacct_nocheck+0x19a/0x3c0
[ 7497.085131]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.085133]  ? submit_bio_noacct+0x46/0x5a0
[ 7497.085135]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[ 7497.085138]  process_one_work+0x18f/0x3b0
[ 7497.085142]  worker_thread+0x233/0x340
[ 7497.085145]  ? __pfx_worker_thread+0x10/0x10
[ 7497.085148]  ? __pfx_worker_thread+0x10/0x10
[ 7497.085150]  kthread+0xcd/0x100
[ 7497.085152]  ? __pfx_kthread+0x10/0x10
[ 7497.085155]  ret_from_fork+0x31/0x50
[ 7497.085157]  ? __pfx_kthread+0x10/0x10
[ 7497.085159]  ret_from_fork_asm+0x1a/0x30
[ 7497.085164]  </TASK>
[ 7497.085165] INFO: task kworker/u131:2:18973 blocked for more than 122 seconds.
[ 7497.093282]       Not tainted 6.10.3 #1-NixOS
[ 7497.098185] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.106988] task:kworker/u131:2  state:D stack:0     pid:18973 tgid:18973 ppid:2      flags:0x00004000
[ 7497.106993] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[ 7497.106998] Call Trace:
[ 7497.106999]  <TASK>
[ 7497.107001]  __schedule+0x3fa/0x1550
[ 7497.107006]  schedule+0x27/0xf0
[ 7497.107009]  md_bitmap_startwrite+0x14f/0x1c0
[ 7497.107012]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 7497.107016]  __add_stripe_bio+0x1f4/0x240 [raid456]
[ 7497.107021]  raid5_make_request+0x34d/0x1280 [raid456]
[ 7497.107026]  ? __pfx_woken_wake_function+0x10/0x10
[ 7497.107028]  ? bio_split_rw+0x193/0x260
[ 7497.107033]  md_handle_request+0x153/0x270
[ 7497.107036]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.107039]  __submit_bio+0x190/0x240
[ 7497.107042]  submit_bio_noacct_nocheck+0x19a/0x3c0
[ 7497.107044]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.107046]  ? submit_bio_noacct+0x46/0x5a0
[ 7497.107049]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[ 7497.107052]  process_one_work+0x18f/0x3b0
[ 7497.107055]  worker_thread+0x233/0x340
[ 7497.107058]  ? __pfx_worker_thread+0x10/0x10
[ 7497.107060]  ? __pfx_worker_thread+0x10/0x10
[ 7497.107063]  kthread+0xcd/0x100
[ 7497.107065]  ? __pfx_kthread+0x10/0x10
[ 7497.107067]  ret_from_fork+0x31/0x50
[ 7497.107069]  ? __pfx_kthread+0x10/0x10
[ 7497.107071]  ret_from_fork_asm+0x1a/0x30
[ 7497.107081]  </TASK>
[ 7497.107086] INFO: task rsync:23530 blocked for more than 122 seconds.
[ 7497.114327]       Not tainted 6.10.3 #1-NixOS
[ 7497.119226] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.128020] task:rsync           state:D stack:0     pid:23530 tgid:23530 ppid:23520  flags:0x00000000
[ 7497.128024] Call Trace:
[ 7497.128025]  <TASK>
[ 7497.128027]  __schedule+0x3fa/0x1550
[ 7497.128030]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.128034]  schedule+0x27/0xf0
[ 7497.128036]  schedule_timeout+0x15d/0x170
[ 7497.128040]  __down_common+0x119/0x220
[ 7497.128045]  down+0x47/0x60
[ 7497.128048]  xfs_buf_lock+0x31/0xe0 [xfs]
[ 7497.128131]  xfs_buf_find_lock+0x55/0x100 [xfs]
[ 7497.128185]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
[ 7497.128236]  xfs_buf_read_map+0x62/0x2a0 [xfs]
[ 7497.128287]  ? xfs_read_agf+0x97/0x150 [xfs]
[ 7497.128357]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
[ 7497.128429]  ? xfs_read_agf+0x97/0x150 [xfs]
[ 7497.128489]  xfs_read_agf+0x97/0x150 [xfs]
[ 7497.128540]  xfs_alloc_read_agf+0x5a/0x200 [xfs]
[ 7497.128589]  xfs_alloc_fix_freelist+0x345/0x660 [xfs]
[ 7497.128641]  xfs_alloc_vextent_prepare_ag+0x2d/0x120 [xfs]
[ 7497.128690]  xfs_alloc_vextent_exact_bno+0xd1/0x100 [xfs]
[ 7497.128740]  xfs_ialloc_ag_alloc+0x177/0x610 [xfs]
[ 7497.128812]  xfs_dialloc+0x219/0x7b0 [xfs]
[ 7497.128864]  ? xfs_trans_alloc_icreate+0x93/0x120 [xfs]
[ 7497.128935]  xfs_create+0x2c7/0x640 [xfs]
[ 7497.128998]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.129001]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.129003]  ? get_cached_acl+0x4c/0x90
[ 7497.129008]  xfs_generic_create+0x321/0x3a0 [xfs]
[ 7497.129061]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.129065]  path_openat+0xf82/0x1240
[ 7497.129072]  do_filp_open+0xc4/0x170
[ 7497.129084]  do_sys_openat2+0xab/0xe0
[ 7497.129090]  __x64_sys_openat+0x57/0xa0
[ 7497.129093]  do_syscall_64+0xb7/0x200
[ 7497.129096]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 7497.129099] RIP: 0033:0x7f6809d2be2f
[ 7497.129121] RSP: 002b:00007ffe3d410cf0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[ 7497.129123] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6809d2be2f
[ 7497.129124] RDX: 00000000000000c2 RSI: 00007ffe3d412fc0 RDI: 00000000ffffff9c
[ 7497.129126] RBP: 000000000003a2f8 R08: 001f1108db8eff56 R09: 00007ffe3d410f2c
[ 7497.129128] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffe3d41300b
[ 7497.129129] R13: 00007ffe3d412fc0 R14: 8421084210842109 R15: 00007f6809dc6a80
[ 7497.129133]  </TASK>
[ 7497.129146] INFO: task kworker/u131:3:23611 blocked for more than 122 seconds.
[ 7497.137277]       Not tainted 6.10.3 #1-NixOS
[ 7497.142187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.150980] task:kworker/u131:3  state:D stack:0     pid:23611 tgid:23611 ppid:2      flags:0x00004000
[ 7497.150986] Workqueue: writeback wb_workfn (flush-253:4)
[ 7497.150993] Call Trace:
[ 7497.150995]  <TASK>
[ 7497.150998]  __schedule+0x3fa/0x1550
[ 7497.151007]  schedule+0x27/0xf0
[ 7497.151009]  schedule_timeout+0x15d/0x170
[ 7497.151013]  __wait_for_common+0x90/0x1c0
[ 7497.151015]  ? __pfx_schedule_timeout+0x10/0x10
[ 7497.151020]  xfs_buf_iowait+0x1c/0xc0 [xfs]
[ 7497.151094]  __xfs_buf_submit+0x132/0x1e0 [xfs]
[ 7497.151146]  xfs_buf_read_map+0x129/0x2a0 [xfs]
[ 7497.151197]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[ 7497.151267]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
[ 7497.151336]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[ 7497.151396]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[ 7497.151446]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
[ 7497.151497]  xfs_btree_lookup+0xea/0x500 [xfs]
[ 7497.151546]  ? xfs_btree_increment+0x44/0x310 [xfs]
[ 7497.151596]  xfs_alloc_fixup_trees+0x66/0x4c0 [xfs]
[ 7497.151661]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
[ 7497.151710]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
[ 7497.151764]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
[ 7497.151813]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.151817]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
[ 7497.151884]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
[ 7497.151938]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
[ 7497.151999]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
[ 7497.152048]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
[ 7497.152105]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
[ 7497.152155]  xfs_map_blocks+0x257/0x420 [xfs]
[ 7497.152228]  iomap_writepages+0x271/0x9b0
[ 7497.152235]  xfs_vm_writepages+0x67/0x90 [xfs]
[ 7497.152287]  do_writepages+0x76/0x260
[ 7497.152294]  ? uas_submit_urbs+0x8c/0x4c0 [uas]
[ 7497.152297]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.152300]  ? psi_group_change+0x213/0x3c0
[ 7497.152305]  __writeback_single_inode+0x3d/0x350
[ 7497.152307]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.152309]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.152312]  writeback_sb_inodes+0x21c/0x4e0
[ 7497.152323]  __writeback_inodes_wb+0x4c/0xf0
[ 7497.152325]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.152328]  wb_writeback+0x193/0x310
[ 7497.152332]  wb_workfn+0x357/0x450
[ 7497.152337]  process_one_work+0x18f/0x3b0
[ 7497.152342]  worker_thread+0x233/0x340
[ 7497.152345]  ? __pfx_worker_thread+0x10/0x10
[ 7497.152348]  kthread+0xcd/0x100
[ 7497.152352]  ? __pfx_kthread+0x10/0x10
[ 7497.152354]  ret_from_fork+0x31/0x50
[ 7497.152358]  ? __pfx_kthread+0x10/0x10
[ 7497.152360]  ret_from_fork_asm+0x1a/0x30
[ 7497.152366]  </TASK>
[ 7497.152368] INFO: task kworker/u131:4:23612 blocked for more than 123 seconds.
[ 7497.160489]       Not tainted 6.10.3 #1-NixOS
[ 7497.165390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.174190] task:kworker/u131:4  state:D stack:0     pid:23612 tgid:23612 ppid:2      flags:0x00004000
[ 7497.174194] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[ 7497.174200] Call Trace:
[ 7497.174201]  <TASK>
[ 7497.174203]  __schedule+0x3fa/0x1550
[ 7497.174208]  schedule+0x27/0xf0
[ 7497.174210]  md_bitmap_startwrite+0x14f/0x1c0
[ 7497.174214]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 7497.174219]  __add_stripe_bio+0x1f4/0x240 [raid456]
[ 7497.174227]  raid5_make_request+0x34d/0x1280 [raid456]
[ 7497.174233]  ? __pfx_woken_wake_function+0x10/0x10
[ 7497.174235]  ? bio_split_rw+0x193/0x260
[ 7497.174242]  md_handle_request+0x153/0x270
[ 7497.174245]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.174248]  __submit_bio+0x190/0x240
[ 7497.174252]  submit_bio_noacct_nocheck+0x19a/0x3c0
[ 7497.174255]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.174257]  ? submit_bio_noacct+0x46/0x5a0
[ 7497.174259]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[ 7497.174263]  process_one_work+0x18f/0x3b0
[ 7497.174266]  worker_thread+0x233/0x340
[ 7497.174269]  ? __pfx_worker_thread+0x10/0x10
[ 7497.174271]  kthread+0xcd/0x100
[ 7497.174273]  ? __pfx_kthread+0x10/0x10
[ 7497.174276]  ret_from_fork+0x31/0x50
[ 7497.174277]  ? __pfx_kthread+0x10/0x10
[ 7497.174279]  ret_from_fork_asm+0x1a/0x30
[ 7497.174285]  </TASK>
[ 7497.174292] INFO: task kworker/u130:33:23645 blocked for more than 123 seconds.
[ 7497.182499]       Not tainted 6.10.3 #1-NixOS
[ 7497.187400] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.196203] task:kworker/u130:33 state:D stack:0     pid:23645 tgid:23645 ppid:2      flags:0x00004000
[ 7497.196209] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
[ 7497.196281] Call Trace:
[ 7497.196282]  <TASK>
[ 7497.196285]  __schedule+0x3fa/0x1550
[ 7497.196289]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.196293]  schedule+0x27/0xf0
[ 7497.196295]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
[ 7497.196346]  ? __pfx_default_wake_function+0x10/0x10
[ 7497.196351]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
[ 7497.196400]  xlog_write+0x310/0x470 [xfs]
[ 7497.196451]  xlog_cil_push_work+0x6a5/0x880 [xfs]
[ 7497.196503]  process_one_work+0x18f/0x3b0
[ 7497.196507]  worker_thread+0x233/0x340
[ 7497.196510]  ? __pfx_worker_thread+0x10/0x10
[ 7497.196512]  ? __pfx_worker_thread+0x10/0x10
[ 7497.196515]  kthread+0xcd/0x100
[ 7497.196517]  ? __pfx_kthread+0x10/0x10
[ 7497.196519]  ret_from_fork+0x31/0x50
[ 7497.196522]  ? __pfx_kthread+0x10/0x10
[ 7497.196524]  ret_from_fork_asm+0x1a/0x30
[ 7497.196529]  </TASK>
[ 7497.196531] INFO: task kworker/u131:6:23863 blocked for more than 123 seconds.
[ 7497.204648]       Not tainted 6.10.3 #1-NixOS
[ 7497.209539] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.218347] task:kworker/u131:6  state:D stack:0     pid:23863 tgid:23863 ppid:2      flags:0x00004000
[ 7497.218353] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[ 7497.218359] Call Trace:
[ 7497.218360]  <TASK>
[ 7497.218363]  __schedule+0x3fa/0x1550
[ 7497.218369]  schedule+0x27/0xf0
[ 7497.218371]  md_bitmap_startwrite+0x14f/0x1c0
[ 7497.218375]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 7497.218379]  __add_stripe_bio+0x1f4/0x240 [raid456]
[ 7497.218384]  raid5_make_request+0x34d/0x1280 [raid456]
[ 7497.218390]  ? __pfx_woken_wake_function+0x10/0x10
[ 7497.218392]  ? bio_split_rw+0x193/0x260
[ 7497.218398]  md_handle_request+0x153/0x270
[ 7497.218401]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.218405]  __submit_bio+0x190/0x240
[ 7497.218408]  submit_bio_noacct_nocheck+0x19a/0x3c0
[ 7497.218410]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.218413]  ? submit_bio_noacct+0x46/0x5a0
[ 7497.218415]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[ 7497.218419]  process_one_work+0x18f/0x3b0
[ 7497.218423]  worker_thread+0x233/0x340
[ 7497.218426]  ? __pfx_worker_thread+0x10/0x10
[ 7497.218428]  kthread+0xcd/0x100
[ 7497.218430]  ? __pfx_kthread+0x10/0x10
[ 7497.218433]  ret_from_fork+0x31/0x50
[ 7497.218435]  ? __pfx_kthread+0x10/0x10
[ 7497.218437]  ret_from_fork_asm+0x1a/0x30
[ 7497.218442]  </TASK>
[ 7497.218444] INFO: task kworker/u131:7:23864 blocked for more than 123 seconds.
[ 7497.226572]       Not tainted 6.10.3 #1-NixOS
[ 7497.231475] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 7497.240277] task:kworker/u131:7  state:D stack:0     pid:23864 tgid:23864 ppid:2      flags:0x00004000
[ 7497.240282] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[ 7497.240287] Call Trace:
[ 7497.240288]  <TASK>
[ 7497.240290]  __schedule+0x3fa/0x1550
[ 7497.240298]  schedule+0x27/0xf0
[ 7497.240301]  md_bitmap_startwrite+0x14f/0x1c0
[ 7497.240304]  ? __pfx_autoremove_wake_function+0x10/0x10
[ 7497.240310]  __add_stripe_bio+0x1f4/0x240 [raid456]
[ 7497.240314]  raid5_make_request+0x34d/0x1280 [raid456]
[ 7497.240320]  ? __pfx_woken_wake_function+0x10/0x10
[ 7497.240322]  ? bio_split_rw+0x193/0x260
[ 7497.240328]  md_handle_request+0x153/0x270
[ 7497.240330]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.240334]  __submit_bio+0x190/0x240
[ 7497.240338]  submit_bio_noacct_nocheck+0x19a/0x3c0
[ 7497.240340]  ? srso_alias_return_thunk+0x5/0xfbef5
[ 7497.240342]  ? submit_bio_noacct+0x46/0x5a0
[ 7497.240345]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[ 7497.240348]  process_one_work+0x18f/0x3b0
[ 7497.240353]  worker_thread+0x233/0x340
[ 7497.240356]  ? __pfx_worker_thread+0x10/0x10
[ 7497.240358]  kthread+0xcd/0x100
[ 7497.240361]  ? __pfx_kthread+0x10/0x10
[ 7497.240364]  ret_from_fork+0x31/0x50
[ 7497.240366]  ? __pfx_kthread+0x10/0x10
[ 7497.240368]  ret_from_fork_asm+0x1a/0x30
[ 7497.240375]  </TASK>
[ 7497.240376] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings

> 
> 
> 
>> Kernel:
> 
>> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024
> 
>> The config is unchanged except from the deprecated NFSD_V2_ACL and NFSD_V3 options which I had to remove. NFS is not in use on this server, though.
> 
>> Output:
> 
>> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
>> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
>> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.860435] Call Trace:
>> [ 4549.860437]  <TASK>
>> [ 4549.860440]  __schedule+0x373/0x1580
>> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
>> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
>> [ 4549.860453]  schedule+0x5b/0xe0
>> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.860459]  ? finish_wait+0x90/0x90
>> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
>> [ 4549.860490]  ? finish_wait+0x90/0x90
>> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
>> [ 4549.860495]  md_handle_request+0x11f/0x1b0
>> [ 4549.860500]  md_submit_bio+0x6e/0xb0
>> [ 4549.860502]  __submit_bio+0x18c/0x220
>> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.860517]  process_one_work+0x1d3/0x360
>> [ 4549.860521]  worker_thread+0x4d/0x3b0
>> [ 4549.860523]  ? process_one_work+0x360/0x360
>> [ 4549.860525]  kthread+0x115/0x140
>> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
>> [ 4549.860530]  ret_from_fork+0x1f/0x30
>> [ 4549.860535]  </TASK>
>> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
>> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
>> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.882368] Call Trace:
>> [ 4549.882369]  <TASK>
>> [ 4549.882370]  __schedule+0x373/0x1580
>> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4549.882379]  schedule+0x5b/0xe0
>> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.882384]  ? finish_wait+0x90/0x90
>> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.882403]  ? finish_wait+0x90/0x90
>> [ 4549.882406]  md_handle_request+0x11f/0x1b0
>> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
>> [ 4549.882413]  md_submit_bio+0x6e/0xb0
>> [ 4549.882415]  __submit_bio+0x18c/0x220
>> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.882428]  process_one_work+0x1d3/0x360
>> [ 4549.882431]  worker_thread+0x4d/0x3b0
>> [ 4549.882433]  ? process_one_work+0x360/0x360
>> [ 4549.882435]  kthread+0x115/0x140
>> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
>> [ 4549.882438]  ret_from_fork+0x1f/0x30
>> [ 4549.882442]  </TASK>
>> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
>> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
>> [ 4549.904411] Call Trace:
>> [ 4549.904412]  <TASK>
>> [ 4549.904414]  __schedule+0x373/0x1580
>> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
>> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
>> [ 4549.904498]  schedule+0x5b/0xe0
>> [ 4549.904500]  io_schedule+0x42/0x70
>> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
>> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
>> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
>> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
>> [ 4549.904520]  evict+0x15f/0x180
>> [ 4549.904524]  __dentry_kill+0xde/0x170
>> [ 4549.904527]  dput+0x139/0x320
>> [ 4549.904529]  do_renameat2+0x375/0x5f0
>> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
>> [ 4549.904538]  do_syscall_64+0x34/0x80
>> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
>> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
>> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
>> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
>> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
>> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
>> [ 4549.904555]  </TASK>
>> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
>> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
>> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.926380] Call Trace:
>> [ 4549.926381]  <TASK>
>> [ 4549.926383]  __schedule+0x373/0x1580
>> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4549.926392]  schedule+0x5b/0xe0
>> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.926397]  ? finish_wait+0x90/0x90
>> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4549.926413]  ? finish_wait+0x90/0x90
>> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
>> [ 4549.926418]  md_handle_request+0x11f/0x1b0
>> [ 4549.926422]  md_submit_bio+0x6e/0xb0
>> [ 4549.926424]  __submit_bio+0x18c/0x220
>> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.926437]  process_one_work+0x1d3/0x360
>> [ 4549.926441]  worker_thread+0x4d/0x3b0
>> [ 4549.926442]  ? process_one_work+0x360/0x360
>> [ 4549.926444]  kthread+0x115/0x140
>> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
>> [ 4549.926448]  ret_from_fork+0x1f/0x30
>> [ 4549.926454]  </TASK>
>> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
>> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
>> [ 4549.947503] Call Trace:
>> [ 4549.947505]  <TASK>
>> [ 4549.947505]  ? usleep_range_state+0x90/0x90
>> [ 4549.947510]  __schedule+0x373/0x1580
>> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
>> [ 4549.947519]  ? usleep_range_state+0x90/0x90
>> [ 4549.947521]  schedule+0x5b/0xe0
>> [ 4549.947523]  schedule_timeout+0xff/0x130
>> [ 4549.947526]  __wait_for_common+0xaf/0x160
>> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
>> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
>> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
>> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
>> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
>> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
>> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
>> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
>> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
>> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
>> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
>> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
>> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
>> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
>> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
>> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
>> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
>> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
>> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
>> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
>> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
>> [ 4549.948340]  vfs_rename+0x818/0x10d0
>> [ 4549.948345]  ? lookup_dcache+0x17/0x60
>> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
>> [ 4549.948350]  do_renameat2+0x57f/0x5f0
>> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
>> [ 4549.948357]  do_syscall_64+0x34/0x80
>> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
>> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
>> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
>> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
>> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
>> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
>> [ 4549.948373]  </TASK>
>> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
>> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
>> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.970205] Call Trace:
>> [ 4549.970206]  <TASK>
>> [ 4549.970209]  __schedule+0x373/0x1580
>> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.970215]  schedule+0x5b/0xe0
>> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.970219]  ? finish_wait+0x90/0x90
>> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
>> [ 4549.970244]  ? finish_wait+0x90/0x90
>> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
>> [ 4549.970248]  md_handle_request+0x11f/0x1b0
>> [ 4549.970251]  md_submit_bio+0x6e/0xb0
>> [ 4549.970254]  __submit_bio+0x18c/0x220
>> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.970267]  process_one_work+0x1d3/0x360
>> [ 4549.970270]  worker_thread+0x4d/0x3b0
>> [ 4549.970272]  ? process_one_work+0x360/0x360
>> [ 4549.970274]  kthread+0x115/0x140
>> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
>> [ 4549.970278]  ret_from_fork+0x1f/0x30
>> [ 4549.970282]  </TASK>
>> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
>> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
>> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
>> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4549.992097] Call Trace:
>> [ 4549.992098]  <TASK>
>> [ 4549.992100]  __schedule+0x373/0x1580
>> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4549.992109]  schedule+0x5b/0xe0
>> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4549.992114]  ? finish_wait+0x90/0x90
>> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
>> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
>> [ 4549.992135]  ? finish_wait+0x90/0x90
>> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
>> [ 4549.992139]  md_handle_request+0x11f/0x1b0
>> [ 4549.992142]  md_submit_bio+0x6e/0xb0
>> [ 4549.992144]  __submit_bio+0x18c/0x220
>> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
>> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4549.992157]  process_one_work+0x1d3/0x360
>> [ 4549.992160]  worker_thread+0x4d/0x3b0
>> [ 4549.992162]  ? process_one_work+0x360/0x360
>> [ 4549.992163]  kthread+0x115/0x140
>> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
>> [ 4549.992168]  ret_from_fork+0x1f/0x30
>> [ 4549.992172]  </TASK>
>> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
>> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
>> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
>> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4550.013992] Call Trace:
>> [ 4550.013993]  <TASK>
>> [ 4550.013995]  __schedule+0x373/0x1580
>> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4550.014003]  schedule+0x5b/0xe0
>> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4550.014008]  ? finish_wait+0x90/0x90
>> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4550.014022]  ? finish_wait+0x90/0x90
>> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
>> [ 4550.014027]  md_handle_request+0x11f/0x1b0
>> [ 4550.014030]  md_submit_bio+0x6e/0xb0
>> [ 4550.014032]  __submit_bio+0x18c/0x220
>> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
>> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4550.014044]  process_one_work+0x1d3/0x360
>> [ 4550.014047]  worker_thread+0x4d/0x3b0
>> [ 4550.014049]  ? process_one_work+0x360/0x360
>> [ 4550.014050]  kthread+0x115/0x140
>> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
>> [ 4550.014054]  ret_from_fork+0x1f/0x30
>> [ 4550.014058]  </TASK>
>> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
>> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
>> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
>> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4550.035887] Call Trace:
>> [ 4550.035888]  <TASK>
>> [ 4550.035890]  __schedule+0x373/0x1580
>> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4550.035899]  schedule+0x5b/0xe0
>> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4550.035904]  ? finish_wait+0x90/0x90
>> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4550.035919]  ? finish_wait+0x90/0x90
>> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
>> [ 4550.035924]  md_handle_request+0x11f/0x1b0
>> [ 4550.035927]  md_submit_bio+0x6e/0xb0
>> [ 4550.035929]  __submit_bio+0x18c/0x220
>> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
>> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4550.035942]  process_one_work+0x1d3/0x360
>> [ 4550.035946]  worker_thread+0x4d/0x3b0
>> [ 4550.035948]  ? process_one_work+0x360/0x360
>> [ 4550.035949]  kthread+0x115/0x140
>> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
>> [ 4550.035953]  ret_from_fork+0x1f/0x30
>> [ 4550.035957]  </TASK>
>> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
>> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
>> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
>> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>> [ 4550.057794] Call Trace:
>> [ 4550.057796]  <TASK>
>> [ 4550.057798]  __schedule+0x373/0x1580
>> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
>> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>> [ 4550.057806]  schedule+0x5b/0xe0
>> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
>> [ 4550.057810]  ? finish_wait+0x90/0x90
>> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
>> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
>> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
>> [ 4550.057824]  ? finish_wait+0x90/0x90
>> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
>> [ 4550.057828]  md_handle_request+0x11f/0x1b0
>> [ 4550.057831]  md_submit_bio+0x6e/0xb0
>> [ 4550.057834]  __submit_bio+0x18c/0x220
>> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
>> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
>> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>> [ 4550.057846]  process_one_work+0x1d3/0x360
>> [ 4550.057848]  worker_thread+0x4d/0x3b0
>> [ 4550.057850]  ? process_one_work+0x360/0x360
>> [ 4550.057852]  kthread+0x115/0x140
>> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
>> [ 4550.057856]  ret_from_fork+0x1f/0x30
>> [ 4550.057860]  </TASK>
> 
> 
>>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> I tried updating to 5.15.164, but had to struggle against our config management as some options have shifted and need to be filtered out: NFSD_V3 and NFSD2_ACL are now fixed and cause config errors if set - I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
>>> 
>>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>>> 
>>>> Sure,
>>>> 
>>>> would you prefer me testing on 5.15.x or something else?
>>>> 
>>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> On 2024/08/06 22:10, Christian Theune wrote:
>>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVMe-based RAID-6 array that is encrypted using LUKS.
>>>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 GiB in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>>>> Kernel:
>>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>>> 
>>>>> Since you can trigger this easily, I'll suggest you to try the latest
>>>>> kernel release first.
>>>>> 
>>>>> Thanks,
>>>>> Kuai
>>>>> 
>>>>>> See the kernel config attached.
>>>> 
>>>> 
>>>> Kind regards,
>>>> Christian Theune
>>>> 
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>> 
>>>> 
>>> 
> 
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-08  6:02           ` Christian Theune
@ 2024-08-08  6:55             ` Yu Kuai
  2024-08-08  7:06               ` Christian Theune
  2024-08-08 14:23             ` John Stoffel
  1 sibling, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-08-08  6:55 UTC (permalink / raw)
  To: Christian Theune, John Stoffel
  Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

On 2024/08/08 14:02, Christian Theune wrote:
> Hi,
> 
>> On 7. Aug 2024, at 23:05, John Stoffel <john@stoffel.org> wrote:
>>
>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>>
>>
>>
>>> I had some more time at hand and managed to compile 5.15.164. The
>>> issue is the same: after around 1h30m of work it hangs. I’ll try to
>>> reproduce this on a newer supported kernel if I can.
>>
>> Supported by whom? NixOS? Why don't you just install Linux kernel
>> 6.6.x and see if the problem is still there? 5.15.x is ancient and
>> unsupported upstream now.
> 
> I did just that. However, calling 5.15 “un-supported” by upstream confuses me: it’s an official LTS kernel with an EOL of December 2026.
> 
> Also, I’d like to note that NixOS kernels tend to be very close to upstream. The only patches I can see being involved here are those that patch out some hard-coded references to user-space paths:
> 
> https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/linux-kernels.nix#L173
> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/request-key-helper.patch
> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/bridge-stp-helper.patch
> 
> Kernel is now:
> 
> Linux barbrady08 6.10.3 #1-NixOS SMP PREEMPT_DYNAMIC Sat Aug  3 07:01:09 UTC 2024 x86_64 GNU/Linux
> 
> The issue is still there on 6.10.3 and now looks as shown below.
> 
> I’m aware that this output shows symptoms and not (necessarily) the cause. I’m currently a bit out of ideas about where to look for more information and would appreciate any pointers. My suspicion is an interaction problem triggered by the use of NVMe in combination with the other subsystems involved (xfs, dm-crypt and raid are the ones I’m aware of playing a role).
> 
> The use of NVMe itself likely isn’t the issue (we’ve been using NVMe on similar hosts, also in combination with dm-crypt, with this kernel for a while now) and I could imagine that it triggers a race condition due to the higher performance - although the specific performance numbers aren't *that* high: right before the lockup I see ~700 IOPS reading and ~2.5k IOPS writing. So we have seen NVMe with dm-crypt before, but not together with raid.
> 
> I can perform debugging on that machine as needed, but googling for any combination of hung tasks related to nvme/xfs/crypt/raid only turns up generic performance concerns from forums, an unrelated xfs issue mentioned by Red Hat, and the list archive entry for this very post.

Since 6.10 shows the same problem, I'll take a closer look at this.

First of all: is this a new problem, or a new scenario that simply hasn't
been exercised before?
> 
> [ 7497.019235] INFO: task .backy-wrapped:2706 blocked for more than 122 seconds.
> [ 7497.027265]       Not tainted 6.10.3 #1-NixOS
> [ 7497.032173] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.040974] task:.backy-wrapped  state:D stack:0     pid:2706  tgid:2706  ppid:1      flags:0x00000002
> [ 7497.040979] Call Trace:
> [ 7497.040981]  <TASK>
> [ 7497.040987]  __schedule+0x3fa/0x1550
> [ 7497.040996]  ? xfs_iextents_copy+0xec/0x1b0 [xfs]
> [ 7497.041085]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.041089]  ? xlog_copy_iovec+0x30/0x90 [xfs]
> [ 7497.041168]  schedule+0x27/0xf0
> [ 7497.041171]  io_schedule+0x46/0x70
> [ 7497.041173]  folio_wait_bit_common+0x13f/0x340
> [ 7497.041180]  ? __pfx_wake_page_function+0x10/0x10
> [ 7497.041187]  folio_wait_writeback+0x2b/0x80
> [ 7497.041191]  truncate_inode_partial_folio+0x5b/0x190
> [ 7497.041194]  truncate_inode_pages_range+0x1de/0x400
> [ 7497.041207]  evict+0x1b0/0x1d0
> [ 7497.041212]  __dentry_kill+0x6e/0x170
> [ 7497.041216]  dput+0xe5/0x1b0
> [ 7497.041218]  do_renameat2+0x386/0x600
> [ 7497.041226]  __x64_sys_rename+0x43/0x50
> [ 7497.041229]  do_syscall_64+0xb7/0x200
> [ 7497.041234]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> [ 7497.041236] RIP: 0033:0x7f4be586f75b
> [ 7497.041265] RSP: 002b:00007fffd2706538 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
> [ 7497.041267] RAX: ffffffffffffffda RBX: 00007fffd27065d0 RCX: 00007f4be586f75b
> [ 7497.041269] RDX: 0000000000000000 RSI: 00007f4bd6f73e50 RDI: 00007f4bd6f732d0
> [ 7497.041270] RBP: 00007fffd2706580 R08: 00000000ffffffff R09: 0000000000000000
> [ 7497.041271] R10: 00007fffd27067b0 R11: 0000000000000246 R12: 00000000ffffff9c
> [ 7497.041273] R13: 00000000ffffff9c R14: 0000000037fb4ab0 R15: 00007f4be5814810
> [ 7497.041277]  </TASK>
> [ 7497.041281] INFO: task kworker/u131:1:12780 blocked for more than 122 seconds.
> [ 7497.049410]       Not tainted 6.10.3 #1-NixOS
> [ 7497.054317] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.063124] task:kworker/u131:1  state:D stack:0     pid:12780 tgid:12780 ppid:2      flags:0x00004000
> [ 7497.063131] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.063140] Call Trace:
> [ 7497.063141]  <TASK>
> [ 7497.063145]  __schedule+0x3fa/0x1550
> [ 7497.063154]  schedule+0x27/0xf0
> [ 7497.063156]  md_bitmap_startwrite+0x14f/0x1c0

From code review, the counter for the bit has reached COUNTER_MAX, which
means there is already a lot of write IO issued in the range represented
by this bit. md_bitmap_startwrite() is waiting for that IO to complete
before issuing new IO. Hence either IO is being handled too slowly, or a
deadlock is triggered.
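
As a cross-check of this theory, the bitmap state can also be inspected
from userspace while the hang is in progress. A minimal sketch, assuming an
internal write-intent bitmap; /dev/nvme0n1p3 is a placeholder for a real
member device of the array:

```shell
#!/bin/sh
# /proc/mdstat prints a "bitmap: X/Y pages [...], <chunk>KB chunk" line for
# every md array with a write-intent bitmap; a page count that stays pinned
# high during the hang points at the bitmap counters as the choke point.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
fi

# Dump the on-disk bitmap superblock (chunk size, dirty bits, events) of
# one member device. /dev/nvme0n1p3 is a placeholder -- substitute a real
# member of the affected array.
if command -v mdadm >/dev/null 2>&1; then
    mdadm --examine-bitmap /dev/nvme0n1p3 || echo "adjust the device name"
fi
```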

So for the next step, can you do the following tests to help identify the
problem?

1) With the hung tasks present, are the underlying disks idle (check with
iostat)? And can you please collect /sys/block/[disk]/inflight for both
the raid device and the underlying disks?
2) Can you still reproduce the problem with raid1/raid10?
3) Can you still reproduce the problem with the bitmap disabled? It can be
left out by passing --bitmap=none while creating the array.
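
The data for step 1) can be captured with a small script run while the hung
tasks are present: if the md device reports stuck in-flight writes while its
member NVMe disks sit at zero, the IO is queued inside md rather than at the
hardware. A sketch under those assumptions (device names in the comments are
placeholders; iostat needs the sysstat package):

```shell
#!/bin/sh
# Snapshot the in-flight request counters for every block device. Each
# /sys/block/<dev>/inflight file holds two numbers: reads and writes
# currently in flight for that device (e.g. md0 and its nvme* members).
for f in /sys/block/*/inflight; do
    [ -e "$f" ] || continue          # glob matched nothing on this host
    dev=$(basename "$(dirname "$f")")
    printf '%-12s reads/writes in flight: ' "$dev"
    cat "$f"
done

# Per-device utilization snapshot for comparison, if sysstat is installed.
if command -v iostat >/dev/null 2>&1; then
    iostat -x 1 2
fi

# For step 3), a test array without a write-intent bitmap could be created
# with something like (placeholder devices -- do NOT run on live data):
#   mdadm --create /dev/md1 --level=6 --raid-devices=4 --bitmap=none \
#       /dev/nvme0n1p3 /dev/nvme1n1p3 /dev/nvme2n1p3 /dev/nvme3n1p3
```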

Thanks,
Kuai

> [ 7497.063160]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.063168]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.063175]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.063182]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.063184]  ? bio_split_rw+0x193/0x260
> [ 7497.063190]  md_handle_request+0x153/0x270
> [ 7497.063194]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.063198]  __submit_bio+0x190/0x240
> [ 7497.063203]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.063205]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.063207]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.063210]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.063214]  process_one_work+0x18f/0x3b0
> [ 7497.063219]  worker_thread+0x233/0x340
> [ 7497.063222]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.063225]  kthread+0xcd/0x100
> [ 7497.063228]  ? __pfx_kthread+0x10/0x10
> [ 7497.063230]  ret_from_fork+0x31/0x50
> [ 7497.063234]  ? __pfx_kthread+0x10/0x10
> [ 7497.063236]  ret_from_fork_asm+0x1a/0x30
> [ 7497.063243]  </TASK>
> [ 7497.063246] INFO: task kworker/u131:0:17487 blocked for more than 122 seconds.
> [ 7497.071367]       Not tainted 6.10.3 #1-NixOS
> [ 7497.076269] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.085073] task:kworker/u131:0  state:D stack:0     pid:17487 tgid:17487 ppid:2      flags:0x00004000
> [ 7497.085081] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.085086] Call Trace:
> [ 7497.085087]  <TASK>
> [ 7497.085089]  __schedule+0x3fa/0x1550
> [ 7497.085094]  schedule+0x27/0xf0
> [ 7497.085096]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.085098]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.085102]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.085108]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.085114]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.085116]  ? bio_split_rw+0x193/0x260
> [ 7497.085120]  md_handle_request+0x153/0x270
> [ 7497.085122]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.085125]  __submit_bio+0x190/0x240
> [ 7497.085128]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.085131]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.085133]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.085135]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.085138]  process_one_work+0x18f/0x3b0
> [ 7497.085142]  worker_thread+0x233/0x340
> [ 7497.085145]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.085148]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.085150]  kthread+0xcd/0x100
> [ 7497.085152]  ? __pfx_kthread+0x10/0x10
> [ 7497.085155]  ret_from_fork+0x31/0x50
> [ 7497.085157]  ? __pfx_kthread+0x10/0x10
> [ 7497.085159]  ret_from_fork_asm+0x1a/0x30
> [ 7497.085164]  </TASK>
> [ 7497.085165] INFO: task kworker/u131:2:18973 blocked for more than 122 seconds.
> [ 7497.093282]       Not tainted 6.10.3 #1-NixOS
> [ 7497.098185] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.106988] task:kworker/u131:2  state:D stack:0     pid:18973 tgid:18973 ppid:2      flags:0x00004000
> [ 7497.106993] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.106998] Call Trace:
> [ 7497.106999]  <TASK>
> [ 7497.107001]  __schedule+0x3fa/0x1550
> [ 7497.107006]  schedule+0x27/0xf0
> [ 7497.107009]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.107012]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.107016]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.107021]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.107026]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.107028]  ? bio_split_rw+0x193/0x260
> [ 7497.107033]  md_handle_request+0x153/0x270
> [ 7497.107036]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.107039]  __submit_bio+0x190/0x240
> [ 7497.107042]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.107044]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.107046]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.107049]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.107052]  process_one_work+0x18f/0x3b0
> [ 7497.107055]  worker_thread+0x233/0x340
> [ 7497.107058]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.107060]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.107063]  kthread+0xcd/0x100
> [ 7497.107065]  ? __pfx_kthread+0x10/0x10
> [ 7497.107067]  ret_from_fork+0x31/0x50
> [ 7497.107069]  ? __pfx_kthread+0x10/0x10
> [ 7497.107071]  ret_from_fork_asm+0x1a/0x30
> [ 7497.107081]  </TASK>
> [ 7497.107086] INFO: task rsync:23530 blocked for more than 122 seconds.
> [ 7497.114327]       Not tainted 6.10.3 #1-NixOS
> [ 7497.119226] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.128020] task:rsync           state:D stack:0     pid:23530 tgid:23530 ppid:23520  flags:0x00000000
> [ 7497.128024] Call Trace:
> [ 7497.128025]  <TASK>
> [ 7497.128027]  __schedule+0x3fa/0x1550
> [ 7497.128030]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.128034]  schedule+0x27/0xf0
> [ 7497.128036]  schedule_timeout+0x15d/0x170
> [ 7497.128040]  __down_common+0x119/0x220
> [ 7497.128045]  down+0x47/0x60
> [ 7497.128048]  xfs_buf_lock+0x31/0xe0 [xfs]
> [ 7497.128131]  xfs_buf_find_lock+0x55/0x100 [xfs]
> [ 7497.128185]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
> [ 7497.128236]  xfs_buf_read_map+0x62/0x2a0 [xfs]
> [ 7497.128287]  ? xfs_read_agf+0x97/0x150 [xfs]
> [ 7497.128357]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
> [ 7497.128429]  ? xfs_read_agf+0x97/0x150 [xfs]
> [ 7497.128489]  xfs_read_agf+0x97/0x150 [xfs]
> [ 7497.128540]  xfs_alloc_read_agf+0x5a/0x200 [xfs]
> [ 7497.128589]  xfs_alloc_fix_freelist+0x345/0x660 [xfs]
> [ 7497.128641]  xfs_alloc_vextent_prepare_ag+0x2d/0x120 [xfs]
> [ 7497.128690]  xfs_alloc_vextent_exact_bno+0xd1/0x100 [xfs]
> [ 7497.128740]  xfs_ialloc_ag_alloc+0x177/0x610 [xfs]
> [ 7497.128812]  xfs_dialloc+0x219/0x7b0 [xfs]
> [ 7497.128864]  ? xfs_trans_alloc_icreate+0x93/0x120 [xfs]
> [ 7497.128935]  xfs_create+0x2c7/0x640 [xfs]
> [ 7497.128998]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.129001]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.129003]  ? get_cached_acl+0x4c/0x90
> [ 7497.129008]  xfs_generic_create+0x321/0x3a0 [xfs]
> [ 7497.129061]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.129065]  path_openat+0xf82/0x1240
> [ 7497.129072]  do_filp_open+0xc4/0x170
> [ 7497.129084]  do_sys_openat2+0xab/0xe0
> [ 7497.129090]  __x64_sys_openat+0x57/0xa0
> [ 7497.129093]  do_syscall_64+0xb7/0x200
> [ 7497.129096]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> [ 7497.129099] RIP: 0033:0x7f6809d2be2f
> [ 7497.129121] RSP: 002b:00007ffe3d410cf0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> [ 7497.129123] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6809d2be2f
> [ 7497.129124] RDX: 00000000000000c2 RSI: 00007ffe3d412fc0 RDI: 00000000ffffff9c
> [ 7497.129126] RBP: 000000000003a2f8 R08: 001f1108db8eff56 R09: 00007ffe3d410f2c
> [ 7497.129128] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffe3d41300b
> [ 7497.129129] R13: 00007ffe3d412fc0 R14: 8421084210842109 R15: 00007f6809dc6a80
> [ 7497.129133]  </TASK>
> [ 7497.129146] INFO: task kworker/u131:3:23611 blocked for more than 122 seconds.
> [ 7497.137277]       Not tainted 6.10.3 #1-NixOS
> [ 7497.142187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.150980] task:kworker/u131:3  state:D stack:0     pid:23611 tgid:23611 ppid:2      flags:0x00004000
> [ 7497.150986] Workqueue: writeback wb_workfn (flush-253:4)
> [ 7497.150993] Call Trace:
> [ 7497.150995]  <TASK>
> [ 7497.150998]  __schedule+0x3fa/0x1550
> [ 7497.151007]  schedule+0x27/0xf0
> [ 7497.151009]  schedule_timeout+0x15d/0x170
> [ 7497.151013]  __wait_for_common+0x90/0x1c0
> [ 7497.151015]  ? __pfx_schedule_timeout+0x10/0x10
> [ 7497.151020]  xfs_buf_iowait+0x1c/0xc0 [xfs]
> [ 7497.151094]  __xfs_buf_submit+0x132/0x1e0 [xfs]
> [ 7497.151146]  xfs_buf_read_map+0x129/0x2a0 [xfs]
> [ 7497.151197]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [ 7497.151267]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
> [ 7497.151336]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [ 7497.151396]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [ 7497.151446]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
> [ 7497.151497]  xfs_btree_lookup+0xea/0x500 [xfs]
> [ 7497.151546]  ? xfs_btree_increment+0x44/0x310 [xfs]
> [ 7497.151596]  xfs_alloc_fixup_trees+0x66/0x4c0 [xfs]
> [ 7497.151661]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
> [ 7497.151710]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
> [ 7497.151764]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
> [ 7497.151813]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.151817]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
> [ 7497.151884]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
> [ 7497.151938]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
> [ 7497.151999]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
> [ 7497.152048]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
> [ 7497.152105]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
> [ 7497.152155]  xfs_map_blocks+0x257/0x420 [xfs]
> [ 7497.152228]  iomap_writepages+0x271/0x9b0
> [ 7497.152235]  xfs_vm_writepages+0x67/0x90 [xfs]
> [ 7497.152287]  do_writepages+0x76/0x260
> [ 7497.152294]  ? uas_submit_urbs+0x8c/0x4c0 [uas]
> [ 7497.152297]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152300]  ? psi_group_change+0x213/0x3c0
> [ 7497.152305]  __writeback_single_inode+0x3d/0x350
> [ 7497.152307]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152309]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152312]  writeback_sb_inodes+0x21c/0x4e0
> [ 7497.152323]  __writeback_inodes_wb+0x4c/0xf0
> [ 7497.152325]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152328]  wb_writeback+0x193/0x310
> [ 7497.152332]  wb_workfn+0x357/0x450
> [ 7497.152337]  process_one_work+0x18f/0x3b0
> [ 7497.152342]  worker_thread+0x233/0x340
> [ 7497.152345]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.152348]  kthread+0xcd/0x100
> [ 7497.152352]  ? __pfx_kthread+0x10/0x10
> [ 7497.152354]  ret_from_fork+0x31/0x50
> [ 7497.152358]  ? __pfx_kthread+0x10/0x10
> [ 7497.152360]  ret_from_fork_asm+0x1a/0x30
> [ 7497.152366]  </TASK>
> [ 7497.152368] INFO: task kworker/u131:4:23612 blocked for more than 123 seconds.
> [ 7497.160489]       Not tainted 6.10.3 #1-NixOS
> [ 7497.165390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.174190] task:kworker/u131:4  state:D stack:0     pid:23612 tgid:23612 ppid:2      flags:0x00004000
> [ 7497.174194] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.174200] Call Trace:
> [ 7497.174201]  <TASK>
> [ 7497.174203]  __schedule+0x3fa/0x1550
> [ 7497.174208]  schedule+0x27/0xf0
> [ 7497.174210]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.174214]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.174219]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.174227]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.174233]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.174235]  ? bio_split_rw+0x193/0x260
> [ 7497.174242]  md_handle_request+0x153/0x270
> [ 7497.174245]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.174248]  __submit_bio+0x190/0x240
> [ 7497.174252]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.174255]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.174257]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.174259]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.174263]  process_one_work+0x18f/0x3b0
> [ 7497.174266]  worker_thread+0x233/0x340
> [ 7497.174269]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.174271]  kthread+0xcd/0x100
> [ 7497.174273]  ? __pfx_kthread+0x10/0x10
> [ 7497.174276]  ret_from_fork+0x31/0x50
> [ 7497.174277]  ? __pfx_kthread+0x10/0x10
> [ 7497.174279]  ret_from_fork_asm+0x1a/0x30
> [ 7497.174285]  </TASK>
> [ 7497.174292] INFO: task kworker/u130:33:23645 blocked for more than 123 seconds.
> [ 7497.182499]       Not tainted 6.10.3 #1-NixOS
> [ 7497.187400] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.196203] task:kworker/u130:33 state:D stack:0     pid:23645 tgid:23645 ppid:2      flags:0x00004000
> [ 7497.196209] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
> [ 7497.196281] Call Trace:
> [ 7497.196282]  <TASK>
> [ 7497.196285]  __schedule+0x3fa/0x1550
> [ 7497.196289]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.196293]  schedule+0x27/0xf0
> [ 7497.196295]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
> [ 7497.196346]  ? __pfx_default_wake_function+0x10/0x10
> [ 7497.196351]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
> [ 7497.196400]  xlog_write+0x310/0x470 [xfs]
> [ 7497.196451]  xlog_cil_push_work+0x6a5/0x880 [xfs]
> [ 7497.196503]  process_one_work+0x18f/0x3b0
> [ 7497.196507]  worker_thread+0x233/0x340
> [ 7497.196510]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.196512]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.196515]  kthread+0xcd/0x100
> [ 7497.196517]  ? __pfx_kthread+0x10/0x10
> [ 7497.196519]  ret_from_fork+0x31/0x50
> [ 7497.196522]  ? __pfx_kthread+0x10/0x10
> [ 7497.196524]  ret_from_fork_asm+0x1a/0x30
> [ 7497.196529]  </TASK>
> [ 7497.196531] INFO: task kworker/u131:6:23863 blocked for more than 123 seconds.
> [ 7497.204648]       Not tainted 6.10.3 #1-NixOS
> [ 7497.209539] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.218347] task:kworker/u131:6  state:D stack:0     pid:23863 tgid:23863 ppid:2      flags:0x00004000
> [ 7497.218353] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.218359] Call Trace:
> [ 7497.218360]  <TASK>
> [ 7497.218363]  __schedule+0x3fa/0x1550
> [ 7497.218369]  schedule+0x27/0xf0
> [ 7497.218371]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.218375]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.218379]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.218384]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.218390]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.218392]  ? bio_split_rw+0x193/0x260
> [ 7497.218398]  md_handle_request+0x153/0x270
> [ 7497.218401]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.218405]  __submit_bio+0x190/0x240
> [ 7497.218408]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.218410]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.218413]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.218415]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.218419]  process_one_work+0x18f/0x3b0
> [ 7497.218423]  worker_thread+0x233/0x340
> [ 7497.218426]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.218428]  kthread+0xcd/0x100
> [ 7497.218430]  ? __pfx_kthread+0x10/0x10
> [ 7497.218433]  ret_from_fork+0x31/0x50
> [ 7497.218435]  ? __pfx_kthread+0x10/0x10
> [ 7497.218437]  ret_from_fork_asm+0x1a/0x30
> [ 7497.218442]  </TASK>
> [ 7497.218444] INFO: task kworker/u131:7:23864 blocked for more than 123 seconds.
> [ 7497.226572]       Not tainted 6.10.3 #1-NixOS
> [ 7497.231475] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.240277] task:kworker/u131:7  state:D stack:0     pid:23864 tgid:23864 ppid:2      flags:0x00004000
> [ 7497.240282] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.240287] Call Trace:
> [ 7497.240288]  <TASK>
> [ 7497.240290]  __schedule+0x3fa/0x1550
> [ 7497.240298]  schedule+0x27/0xf0
> [ 7497.240301]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.240304]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.240310]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.240314]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.240320]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.240322]  ? bio_split_rw+0x193/0x260
> [ 7497.240328]  md_handle_request+0x153/0x270
> [ 7497.240330]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.240334]  __submit_bio+0x190/0x240
> [ 7497.240338]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.240340]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.240342]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.240345]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.240348]  process_one_work+0x18f/0x3b0
> [ 7497.240353]  worker_thread+0x233/0x340
> [ 7497.240356]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.240358]  kthread+0xcd/0x100
> [ 7497.240361]  ? __pfx_kthread+0x10/0x10
> [ 7497.240364]  ret_from_fork+0x31/0x50
> [ 7497.240366]  ? __pfx_kthread+0x10/0x10
> [ 7497.240368]  ret_from_fork_asm+0x1a/0x30
> [ 7497.240375]  </TASK>
> [ 7497.240376] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
> 
>>
>>> Kernel:
>>
>>> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024
>>
>>> The config is unchanged except for the deprecated NFSD_V2_ACL and NFSD_V3 options, which I had to remove. NFS is not in use on this server, though.
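For completeness, a quick way to double-check that those options really dropped out of the built config (a sketch; "config.gz" here is a placeholder for wherever the config file lives, e.g. /proc/config.gz on kernels built with CONFIG_IKCONFIG_PROC, or the attached file):

```shell
# List any NFSD_V2_ACL / NFSD_V3 lines in the compressed kernel config.
gzip -dc config.gz | grep -E 'CONFIG_NFSD_V2_ACL|CONFIG_NFSD_V3' \
  || echo "neither option is set"
```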
>>
>>> Output:
>>
>>> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
>>> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
>>> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.860435] Call Trace:
>>> [ 4549.860437]  <TASK>
>>> [ 4549.860440]  __schedule+0x373/0x1580
>>> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
>>> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
>>> [ 4549.860453]  schedule+0x5b/0xe0
>>> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.860459]  ? finish_wait+0x90/0x90
>>> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
>>> [ 4549.860490]  ? finish_wait+0x90/0x90
>>> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
>>> [ 4549.860495]  md_handle_request+0x11f/0x1b0
>>> [ 4549.860500]  md_submit_bio+0x6e/0xb0
>>> [ 4549.860502]  __submit_bio+0x18c/0x220
>>> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.860517]  process_one_work+0x1d3/0x360
>>> [ 4549.860521]  worker_thread+0x4d/0x3b0
>>> [ 4549.860523]  ? process_one_work+0x360/0x360
>>> [ 4549.860525]  kthread+0x115/0x140
>>> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.860530]  ret_from_fork+0x1f/0x30
>>> [ 4549.860535]  </TASK>
>>> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
>>> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
>>> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.882368] Call Trace:
>>> [ 4549.882369]  <TASK>
>>> [ 4549.882370]  __schedule+0x373/0x1580
>>> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4549.882379]  schedule+0x5b/0xe0
>>> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.882384]  ? finish_wait+0x90/0x90
>>> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.882403]  ? finish_wait+0x90/0x90
>>> [ 4549.882406]  md_handle_request+0x11f/0x1b0
>>> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
>>> [ 4549.882413]  md_submit_bio+0x6e/0xb0
>>> [ 4549.882415]  __submit_bio+0x18c/0x220
>>> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.882428]  process_one_work+0x1d3/0x360
>>> [ 4549.882431]  worker_thread+0x4d/0x3b0
>>> [ 4549.882433]  ? process_one_work+0x360/0x360
>>> [ 4549.882435]  kthread+0x115/0x140
>>> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.882438]  ret_from_fork+0x1f/0x30
>>> [ 4549.882442]  </TASK>
>>> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
>>> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
>>> [ 4549.904411] Call Trace:
>>> [ 4549.904412]  <TASK>
>>> [ 4549.904414]  __schedule+0x373/0x1580
>>> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
>>> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
>>> [ 4549.904498]  schedule+0x5b/0xe0
>>> [ 4549.904500]  io_schedule+0x42/0x70
>>> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
>>> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
>>> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
>>> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
>>> [ 4549.904520]  evict+0x15f/0x180
>>> [ 4549.904524]  __dentry_kill+0xde/0x170
>>> [ 4549.904527]  dput+0x139/0x320
>>> [ 4549.904529]  do_renameat2+0x375/0x5f0
>>> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
>>> [ 4549.904538]  do_syscall_64+0x34/0x80
>>> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
>>> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
>>> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
>>> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
>>> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
>>> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
>>> [ 4549.904555]  </TASK>
>>> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
>>> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
>>> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.926380] Call Trace:
>>> [ 4549.926381]  <TASK>
>>> [ 4549.926383]  __schedule+0x373/0x1580
>>> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4549.926392]  schedule+0x5b/0xe0
>>> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.926397]  ? finish_wait+0x90/0x90
>>> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4549.926413]  ? finish_wait+0x90/0x90
>>> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4549.926418]  md_handle_request+0x11f/0x1b0
>>> [ 4549.926422]  md_submit_bio+0x6e/0xb0
>>> [ 4549.926424]  __submit_bio+0x18c/0x220
>>> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.926437]  process_one_work+0x1d3/0x360
>>> [ 4549.926441]  worker_thread+0x4d/0x3b0
>>> [ 4549.926442]  ? process_one_work+0x360/0x360
>>> [ 4549.926444]  kthread+0x115/0x140
>>> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.926448]  ret_from_fork+0x1f/0x30
>>> [ 4549.926454]  </TASK>
>>> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
>>> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
>>> [ 4549.947503] Call Trace:
>>> [ 4549.947505]  <TASK>
>>> [ 4549.947505]  ? usleep_range_state+0x90/0x90
>>> [ 4549.947510]  __schedule+0x373/0x1580
>>> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
>>> [ 4549.947519]  ? usleep_range_state+0x90/0x90
>>> [ 4549.947521]  schedule+0x5b/0xe0
>>> [ 4549.947523]  schedule_timeout+0xff/0x130
>>> [ 4549.947526]  __wait_for_common+0xaf/0x160
>>> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
>>> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
>>> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
>>> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
>>> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
>>> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
>>> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
>>> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
>>> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>>> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
>>> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
>>> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
>>> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
>>> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
>>> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
>>> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
>>> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
>>> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
>>> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
>>> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
>>> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
>>> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
>>> [ 4549.948340]  vfs_rename+0x818/0x10d0
>>> [ 4549.948345]  ? lookup_dcache+0x17/0x60
>>> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
>>> [ 4549.948350]  do_renameat2+0x57f/0x5f0
>>> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
>>> [ 4549.948357]  do_syscall_64+0x34/0x80
>>> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
>>> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
>>> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
>>> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
>>> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
>>> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
>>> [ 4549.948373]  </TASK>
>>> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
>>> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
>>> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.970205] Call Trace:
>>> [ 4549.970206]  <TASK>
>>> [ 4549.970209]  __schedule+0x373/0x1580
>>> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.970215]  schedule+0x5b/0xe0
>>> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.970219]  ? finish_wait+0x90/0x90
>>> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
>>> [ 4549.970244]  ? finish_wait+0x90/0x90
>>> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
>>> [ 4549.970248]  md_handle_request+0x11f/0x1b0
>>> [ 4549.970251]  md_submit_bio+0x6e/0xb0
>>> [ 4549.970254]  __submit_bio+0x18c/0x220
>>> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.970267]  process_one_work+0x1d3/0x360
>>> [ 4549.970270]  worker_thread+0x4d/0x3b0
>>> [ 4549.970272]  ? process_one_work+0x360/0x360
>>> [ 4549.970274]  kthread+0x115/0x140
>>> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.970278]  ret_from_fork+0x1f/0x30
>>> [ 4549.970282]  </TASK>
>>> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
>>> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
>>> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.992097] Call Trace:
>>> [ 4549.992098]  <TASK>
>>> [ 4549.992100]  __schedule+0x373/0x1580
>>> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4549.992109]  schedule+0x5b/0xe0
>>> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.992114]  ? finish_wait+0x90/0x90
>>> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
>>> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
>>> [ 4549.992135]  ? finish_wait+0x90/0x90
>>> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
>>> [ 4549.992139]  md_handle_request+0x11f/0x1b0
>>> [ 4549.992142]  md_submit_bio+0x6e/0xb0
>>> [ 4549.992144]  __submit_bio+0x18c/0x220
>>> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.992157]  process_one_work+0x1d3/0x360
>>> [ 4549.992160]  worker_thread+0x4d/0x3b0
>>> [ 4549.992162]  ? process_one_work+0x360/0x360
>>> [ 4549.992163]  kthread+0x115/0x140
>>> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.992168]  ret_from_fork+0x1f/0x30
>>> [ 4549.992172]  </TASK>
>>> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
>>> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
>>> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
>>> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4550.013992] Call Trace:
>>> [ 4550.013993]  <TASK>
>>> [ 4550.013995]  __schedule+0x373/0x1580
>>> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4550.014003]  schedule+0x5b/0xe0
>>> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4550.014008]  ? finish_wait+0x90/0x90
>>> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4550.014022]  ? finish_wait+0x90/0x90
>>> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4550.014027]  md_handle_request+0x11f/0x1b0
>>> [ 4550.014030]  md_submit_bio+0x6e/0xb0
>>> [ 4550.014032]  __submit_bio+0x18c/0x220
>>> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4550.014044]  process_one_work+0x1d3/0x360
>>> [ 4550.014047]  worker_thread+0x4d/0x3b0
>>> [ 4550.014049]  ? process_one_work+0x360/0x360
>>> [ 4550.014050]  kthread+0x115/0x140
>>> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
>>> [ 4550.014054]  ret_from_fork+0x1f/0x30
>>> [ 4550.014058]  </TASK>
>>> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
>>> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
>>> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
>>> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4550.035887] Call Trace:
>>> [ 4550.035888]  <TASK>
>>> [ 4550.035890]  __schedule+0x373/0x1580
>>> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4550.035899]  schedule+0x5b/0xe0
>>> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4550.035904]  ? finish_wait+0x90/0x90
>>> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4550.035919]  ? finish_wait+0x90/0x90
>>> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4550.035924]  md_handle_request+0x11f/0x1b0
>>> [ 4550.035927]  md_submit_bio+0x6e/0xb0
>>> [ 4550.035929]  __submit_bio+0x18c/0x220
>>> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4550.035942]  process_one_work+0x1d3/0x360
>>> [ 4550.035946]  worker_thread+0x4d/0x3b0
>>> [ 4550.035948]  ? process_one_work+0x360/0x360
>>> [ 4550.035949]  kthread+0x115/0x140
>>> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
>>> [ 4550.035953]  ret_from_fork+0x1f/0x30
>>> [ 4550.035957]  </TASK>
>>> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
>>> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
>>> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
>>> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4550.057794] Call Trace:
>>> [ 4550.057796]  <TASK>
>>> [ 4550.057798]  __schedule+0x373/0x1580
>>> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4550.057806]  schedule+0x5b/0xe0
>>> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4550.057810]  ? finish_wait+0x90/0x90
>>> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4550.057824]  ? finish_wait+0x90/0x90
>>> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4550.057828]  md_handle_request+0x11f/0x1b0
>>> [ 4550.057831]  md_submit_bio+0x6e/0xb0
>>> [ 4550.057834]  __submit_bio+0x18c/0x220
>>> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4550.057846]  process_one_work+0x1d3/0x360
>>> [ 4550.057848]  worker_thread+0x4d/0x3b0
>>> [ 4550.057850]  ? process_one_work+0x360/0x360
>>> [ 4550.057852]  kthread+0x115/0x140
>>> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
>>> [ 4550.057856]  ret_from_fork+0x1f/0x30
>>> [ 4550.057860]  </TASK>
>>
>>
>>>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>>>>
>>>> I tried updating to 5.15.164, but had to struggle against our config management, as some options have shifted and need to be filtered out: NFSD_V3 and NFSD_V2_ACL are now fixed and cause config errors if set - I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
>>>>
>>>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>
>>>>> Sure,
>>>>>
>>>>> would you prefer me to test on 5.15.x or something else?
>>>>>
>>>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> 在 2024/08/06 22:10, Christian Theune 写道:
>>>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array that is encrypted using LUKS.
>>>>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 gib in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>>>>> Kernel:
>>>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>>>>
>>>>>> Since you can trigger this easily, I suggest you try the latest
>>>>>> kernel release first.
>>>>>>
>>>>>> Thanks,
>>>>>> Kuai
>>>>>>
>>>>>>> See the kernel config attached.
>>>>>
>>>>>
>>>>> Liebe Grüße,
>>>>> Christian Theune
>>>>>
>>>>> -- 
>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>>
>>>>>
>>>>
>>
>>
>>
> 
> Liebe Grüße,
> Christian Theune
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-08  6:55             ` Yu Kuai
@ 2024-08-08  7:06               ` Christian Theune
  2024-08-08  8:53                 ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-08  7:06 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 8. Aug 2024, at 08:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Since 6.10 is the same, I'll take a closer look at this.

Much appreciated, thanks!

> First of all, is this a new problem or a new scenario?

Both? ;)

The user-level scenario (rsyncing those files) is a regular task that has been working fine. The only new aspect of the scenario is that we’re now using NVMe. The problem has not been observed before.
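In case it helps triage the wall of traces: a small sketch that tallies which task names show up in hung-task reports (assumes the dmesg line format shown in the logs; the function name is mine):

```shell
# Count blocked tasks by name from dmesg-style "INFO: task NAME:PID blocked" lines.
hung_task_tally() {
  # The greedy capture keeps colons inside kworker names ("kworker/u64:7");
  # only the trailing ":PID" is stripped before counting.
  sed -n 's/.*INFO: task \(.*\):[0-9]\{1,\} blocked.*/\1/p' | sort | uniq -c | sort -rn
}

# Example on two lines in the same shape as the reports above:
printf '%s\n' \
  '[ 4549.8] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.' \
  '[ 4549.9] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.' \
  | hung_task_tally
```

On the affected host you would feed it `dmesg` output (or the journal) instead of the sample lines.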

>> [ 7497.019235] INFO: task .backy-wrapped:2706 blocked for more than 122 seconds.
>> [ 7497.027265]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.032173] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.040974] task:.backy-wrapped  state:D stack:0     pid:2706  tgid:2706  ppid:1      flags:0x00000002
>> [ 7497.040979] Call Trace:
>> [ 7497.040981]  <TASK>
>> [ 7497.040987]  __schedule+0x3fa/0x1550
>> [ 7497.040996]  ? xfs_iextents_copy+0xec/0x1b0 [xfs]
>> [ 7497.041085]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.041089]  ? xlog_copy_iovec+0x30/0x90 [xfs]
>> [ 7497.041168]  schedule+0x27/0xf0
>> [ 7497.041171]  io_schedule+0x46/0x70
>> [ 7497.041173]  folio_wait_bit_common+0x13f/0x340
>> [ 7497.041180]  ? __pfx_wake_page_function+0x10/0x10
>> [ 7497.041187]  folio_wait_writeback+0x2b/0x80
>> [ 7497.041191]  truncate_inode_partial_folio+0x5b/0x190
>> [ 7497.041194]  truncate_inode_pages_range+0x1de/0x400
>> [ 7497.041207]  evict+0x1b0/0x1d0
>> [ 7497.041212]  __dentry_kill+0x6e/0x170
>> [ 7497.041216]  dput+0xe5/0x1b0
>> [ 7497.041218]  do_renameat2+0x386/0x600
>> [ 7497.041226]  __x64_sys_rename+0x43/0x50
>> [ 7497.041229]  do_syscall_64+0xb7/0x200
>> [ 7497.041234]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>> [ 7497.041236] RIP: 0033:0x7f4be586f75b
>> [ 7497.041265] RSP: 002b:00007fffd2706538 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>> [ 7497.041267] RAX: ffffffffffffffda RBX: 00007fffd27065d0 RCX: 00007f4be586f75b
>> [ 7497.041269] RDX: 0000000000000000 RSI: 00007f4bd6f73e50 RDI: 00007f4bd6f732d0
>> [ 7497.041270] RBP: 00007fffd2706580 R08: 00000000ffffffff R09: 0000000000000000
>> [ 7497.041271] R10: 00007fffd27067b0 R11: 0000000000000246 R12: 00000000ffffff9c
>> [ 7497.041273] R13: 00000000ffffff9c R14: 0000000037fb4ab0 R15: 00007f4be5814810
>> [ 7497.041277]  </TASK>
>> [ 7497.041281] INFO: task kworker/u131:1:12780 blocked for more than 122 seconds.
>> [ 7497.049410]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.054317] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.063124] task:kworker/u131:1  state:D stack:0     pid:12780 tgid:12780 ppid:2      flags:0x00004000
>> [ 7497.063131] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>> [ 7497.063140] Call Trace:
>> [ 7497.063141]  <TASK>
>> [ 7497.063145]  __schedule+0x3fa/0x1550
>> [ 7497.063154]  schedule+0x27/0xf0
>> [ 7497.063156]  md_bitmap_startwrite+0x14f/0x1c0
> 
> From code review: the counter for this bit has reached COUNTER_MAX, which
> means a lot of write IO is already in flight in the range represented by
> the bit, and md_bitmap_startwrite() is waiting for that IO to complete
> before issuing new IO. Hence either IO is being handled too slowly, or a
> deadlock has been triggered.
> 
> So for the next step, can you run the following tests to narrow the
> problem down?
> 
> 1) While the tasks are hung, are the underlying disks idle (check with
> iostat)? And can you please collect /sys/block/[disk]/inflight for both
> the raid device and the underlying disks?

I will try that.

> 2) Can you still reproduce the problem with raid1/raid10?
> 3) Can you still reproduce the problem with the bitmap disabled, by
> passing --bitmap=none when creating the array?

Here’s where debugging gets difficult: I have valuable data on this machine, so if we can avoid re-creating the array, that would be great. Otherwise I’ll have to move the data to a different host - this can be done, but it will take some time.

I could take the hot-spare out of the cluster and do something with it, but I guess that doesn’t give you any insight, right?

I’ll get back to you with the information from question 1.
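For reference, a minimal sketch (not from the thread) of collecting the inflight counters. The device names are taken from this host's layout, and the SYSBLOCK variable is only a hypothetical override so the helper can be exercised against a fake directory tree instead of the real sysfs:

```shell
# Helper for question 1: print the in-flight request counters
# (two columns: reads, writes) for each named block device.
# SYSBLOCK defaults to the real sysfs path; overriding it lets the
# helper run against a fake tree for testing.
SYSBLOCK=${SYSBLOCK:-/sys/block}

show_inflight() {
    for dev in "$@"; do
        f="$SYSBLOCK/$dev/inflight"
        # Skip devices that do not exist on this host
        [ -r "$f" ] && printf '%s: %s\n' "$dev" "$(cat "$f")"
    done
}

# On the affected host, against the array and its members, e.g.:
# show_inflight md127 nvme0n1 nvme1n1 nvme2n1
```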

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-08  7:06               ` Christian Theune
@ 2024-08-08  8:53                 ` Christian Theune
  2024-08-09  1:13                   ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-08  8:53 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 8. Aug 2024, at 09:06, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> 1) While the tasks are hung, are the underlying disks idle (check with
>> iostat)? And can you please collect /sys/block/[disk]/inflight for both
>> the raid device and the underlying disks?

The underlying disks are completely idle, while the md device reports 100% utilization:

Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00 100.00
dm-1             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-2             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-3             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-4             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00 100.00
md127            0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00 100.00
nvme0n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme10n1         0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme11n1         0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme1n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme2n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme3n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme4n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme5n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme6n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme7n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme8n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme9n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
sda              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00

Inflight is 0 on all underlying disks (the two columns are in-flight reads and writes):

$ cat /sys/block/nvme*/inflight
       0        0
       0        0
       0        0
       0        0
       0        0
       0        0
       0        0
       0        0
       0        0
       0        0
       0        0
       0        0

The md device, however, has 348 writes in flight:

/sys/block/md127
       0      348

Does this help for now?

Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-08  6:02           ` Christian Theune
  2024-08-08  6:55             ` Yu Kuai
@ 2024-08-08 14:23             ` John Stoffel
  2024-08-19 19:12               ` tihmstar
  1 sibling, 1 reply; 88+ messages in thread
From: John Stoffel @ 2024-08-08 14:23 UTC (permalink / raw)
  To: Christian Theune
  Cc: John Stoffel, Yu Kuai, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:

> Hi,
>> On 7. Aug 2024, at 23:05, John Stoffel <john@stoffel.org> wrote:
>> 
>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>> 
>> 
>> 
>>> I had some more time at hand and managed to compile 5.15.164. The
>>> issue is the same: after around 1h30m of work it hangs. I’ll try to
>>> reproduce this on a newer supported kernel if I can.
>> 
>> Supported by who?   NixOS?  Why don't you just install linux kernel
>> 6.6.x and see if the problem is still there?  5.15.x is ancient and
>> unsupported upstream now.

> I did just that. However, 5.15 “un-supported” by upstream is
> confusing me. It’s an official LTS kernel with an EOL of December
> 2026.

To quote the kernel.org:

     Longterm

     There are usually several "longterm maintenance" kernel releases
     provided for the purposes of backporting bugfixes for older kernel
     trees. Only important bugfixes are applied to such kernels and they
     don't usually see very frequent releases, especially for older trees.

So when we run into people having problems with LTS kernels, the first
thing we ask is for them to run the most recent kernels, because
that's where the bug fixing happens.  

In any case, there have been some bugs in recent RAID5/RAID6 setups,
so going to a recent kernel will help track these down. 


> Also, I’d like to note that NixOS kernels tend to be very close to
> upstream. The only patches that I can see are involved here are
> those that patch out some hard coded references to user space paths:

Not in this case; kernel 5.15 is ancient, and you should be running
6.9.x or even newer when debugging issues like this.

> https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/linux-kernels.nix#L173
> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/request-key-helper.patch
> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/bridge-stp-helper.patch

> Kernel is now:

> Linux barbrady08 6.10.3 #1-NixOS SMP PREEMPT_DYNAMIC Sat Aug  3 07:01:09 UTC 2024 x86_64 GNU/Linux


> The issue is still there on 6.10.3 and now looks like shown below.

Great!  Thanks for making this change; it will let the developers
help you more easily.

> I’m aware that this is output that shows symptoms and not
> (necessarily) the cause. I’m currently a bit out of ideas where to
> look for more information and would appreciate any pointers. My
> suspicion is an interaction problem triggered by the use of NVMe in
> combination with other subsystems (xfs, dm-crypt and raid are the ones
> I’m aware of playing a role).

> The use of NVMe itself likely isn’t the issue (we’ve been using NVMe
> on similar hosts and also in combination with dm-crypt with this
> kernel for a while now) and I could imagine that it triggers a race
> condition due to the higher performance - although the specific
> performance parameters aren't *that* high. Right before the lockup I
> see ~700 IOPS reading and ~2.5k IOPS writing. So we have seen NVMe
> with dm-crypt but not with raid before.

> I can perform debugging on that machine as needed, but googling for
> any combination of hung tasks related to nvme/xfs/crypt/raid only
> turns up generic performance concerns from forums, an unrelated xfs
> issue mentioned by Red Hat, and the list archive entry for this very
> post.

Can you try setting up some loop devices in the same type of
configuration, and seeing if you can replicate the issue that way?
Let's try to get the nvme stuff out of the way to see if this can be
replicated more easily.  
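A rough sketch of the loop-device reproduction suggested above. The device names (/dev/md99, md99crypt), image sizes, and paths are all hypothetical; the commented commands are destructive and should only be run as root on a scratch machine:

```shell
# Rebuild the same md + dm-crypt + xfs stack on loop devices to take
# the NVMe hardware out of the picture. Names and sizes are examples.
NDEV=12                                 # same member count as the real array
IMGDIR=${IMGDIR:-/var/tmp/md-repro}

# Build up the mdadm create command; members get appended in the loop below.
MDADM_CMD="mdadm --create /dev/md99 --level=6 --raid-devices=$NDEV --bitmap=internal"

# mkdir -p "$IMGDIR"
# for i in $(seq 0 $((NDEV - 1))); do
#     truncate -s 2G "$IMGDIR/disk$i.img"
#     dev=$(losetup --find --show "$IMGDIR/disk$i.img")
#     MDADM_CMD="$MDADM_CMD $dev"
# done
# $MDADM_CMD
# cryptsetup luksFormat /dev/md99
# cryptsetup open /dev/md99 md99crypt
# mkfs.xfs /dev/mapper/md99crypt
# mount /dev/mapper/md99crypt /mnt
# ...then rerun the rsync workload against /mnt and watch for hung tasks.
echo "$MDADM_CMD"
```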


> [ 7497.019235] INFO: task .backy-wrapped:2706 blocked for more than 122 seconds.
> [ 7497.027265]       Not tainted 6.10.3 #1-NixOS
> [ 7497.032173] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.040974] task:.backy-wrapped  state:D stack:0     pid:2706  tgid:2706  ppid:1      flags:0x00000002
> [ 7497.040979] Call Trace:
> [ 7497.040981]  <TASK>
> [ 7497.040987]  __schedule+0x3fa/0x1550
> [ 7497.040996]  ? xfs_iextents_copy+0xec/0x1b0 [xfs]
> [ 7497.041085]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.041089]  ? xlog_copy_iovec+0x30/0x90 [xfs]
> [ 7497.041168]  schedule+0x27/0xf0
> [ 7497.041171]  io_schedule+0x46/0x70
> [ 7497.041173]  folio_wait_bit_common+0x13f/0x340
> [ 7497.041180]  ? __pfx_wake_page_function+0x10/0x10
> [ 7497.041187]  folio_wait_writeback+0x2b/0x80
> [ 7497.041191]  truncate_inode_partial_folio+0x5b/0x190
> [ 7497.041194]  truncate_inode_pages_range+0x1de/0x400
> [ 7497.041207]  evict+0x1b0/0x1d0
> [ 7497.041212]  __dentry_kill+0x6e/0x170
> [ 7497.041216]  dput+0xe5/0x1b0
> [ 7497.041218]  do_renameat2+0x386/0x600
> [ 7497.041226]  __x64_sys_rename+0x43/0x50
> [ 7497.041229]  do_syscall_64+0xb7/0x200
> [ 7497.041234]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> [ 7497.041236] RIP: 0033:0x7f4be586f75b
> [ 7497.041265] RSP: 002b:00007fffd2706538 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
> [ 7497.041267] RAX: ffffffffffffffda RBX: 00007fffd27065d0 RCX: 00007f4be586f75b
> [ 7497.041269] RDX: 0000000000000000 RSI: 00007f4bd6f73e50 RDI: 00007f4bd6f732d0
> [ 7497.041270] RBP: 00007fffd2706580 R08: 00000000ffffffff R09: 0000000000000000
> [ 7497.041271] R10: 00007fffd27067b0 R11: 0000000000000246 R12: 00000000ffffff9c
> [ 7497.041273] R13: 00000000ffffff9c R14: 0000000037fb4ab0 R15: 00007f4be5814810
> [ 7497.041277]  </TASK>
> [ 7497.041281] INFO: task kworker/u131:1:12780 blocked for more than 122 seconds.
> [ 7497.049410]       Not tainted 6.10.3 #1-NixOS
> [ 7497.054317] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.063124] task:kworker/u131:1  state:D stack:0     pid:12780 tgid:12780 ppid:2      flags:0x00004000
> [ 7497.063131] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.063140] Call Trace:
> [ 7497.063141]  <TASK>
> [ 7497.063145]  __schedule+0x3fa/0x1550
> [ 7497.063154]  schedule+0x27/0xf0
> [ 7497.063156]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.063160]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.063168]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.063175]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.063182]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.063184]  ? bio_split_rw+0x193/0x260
> [ 7497.063190]  md_handle_request+0x153/0x270
> [ 7497.063194]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.063198]  __submit_bio+0x190/0x240
> [ 7497.063203]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.063205]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.063207]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.063210]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.063214]  process_one_work+0x18f/0x3b0
> [ 7497.063219]  worker_thread+0x233/0x340
> [ 7497.063222]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.063225]  kthread+0xcd/0x100
> [ 7497.063228]  ? __pfx_kthread+0x10/0x10
> [ 7497.063230]  ret_from_fork+0x31/0x50
> [ 7497.063234]  ? __pfx_kthread+0x10/0x10
> [ 7497.063236]  ret_from_fork_asm+0x1a/0x30
> [ 7497.063243]  </TASK>
> [ 7497.063246] INFO: task kworker/u131:0:17487 blocked for more than 122 seconds.
> [ 7497.071367]       Not tainted 6.10.3 #1-NixOS
> [ 7497.076269] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.085073] task:kworker/u131:0  state:D stack:0     pid:17487 tgid:17487 ppid:2      flags:0x00004000
> [ 7497.085081] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.085086] Call Trace:
> [ 7497.085087]  <TASK>
> [ 7497.085089]  __schedule+0x3fa/0x1550
> [ 7497.085094]  schedule+0x27/0xf0
> [ 7497.085096]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.085098]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.085102]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.085108]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.085114]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.085116]  ? bio_split_rw+0x193/0x260
> [ 7497.085120]  md_handle_request+0x153/0x270
> [ 7497.085122]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.085125]  __submit_bio+0x190/0x240
> [ 7497.085128]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.085131]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.085133]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.085135]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.085138]  process_one_work+0x18f/0x3b0
> [ 7497.085142]  worker_thread+0x233/0x340
> [ 7497.085145]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.085148]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.085150]  kthread+0xcd/0x100
> [ 7497.085152]  ? __pfx_kthread+0x10/0x10
> [ 7497.085155]  ret_from_fork+0x31/0x50
> [ 7497.085157]  ? __pfx_kthread+0x10/0x10
> [ 7497.085159]  ret_from_fork_asm+0x1a/0x30
> [ 7497.085164]  </TASK>
> [ 7497.085165] INFO: task kworker/u131:2:18973 blocked for more than 122 seconds.
> [ 7497.093282]       Not tainted 6.10.3 #1-NixOS
> [ 7497.098185] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.106988] task:kworker/u131:2  state:D stack:0     pid:18973 tgid:18973 ppid:2      flags:0x00004000
> [ 7497.106993] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.106998] Call Trace:
> [ 7497.106999]  <TASK>
> [ 7497.107001]  __schedule+0x3fa/0x1550
> [ 7497.107006]  schedule+0x27/0xf0
> [ 7497.107009]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.107012]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.107016]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.107021]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.107026]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.107028]  ? bio_split_rw+0x193/0x260
> [ 7497.107033]  md_handle_request+0x153/0x270
> [ 7497.107036]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.107039]  __submit_bio+0x190/0x240
> [ 7497.107042]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.107044]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.107046]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.107049]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.107052]  process_one_work+0x18f/0x3b0
> [ 7497.107055]  worker_thread+0x233/0x340
> [ 7497.107058]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.107060]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.107063]  kthread+0xcd/0x100
> [ 7497.107065]  ? __pfx_kthread+0x10/0x10
> [ 7497.107067]  ret_from_fork+0x31/0x50
> [ 7497.107069]  ? __pfx_kthread+0x10/0x10
> [ 7497.107071]  ret_from_fork_asm+0x1a/0x30
> [ 7497.107081]  </TASK>
> [ 7497.107086] INFO: task rsync:23530 blocked for more than 122 seconds.
> [ 7497.114327]       Not tainted 6.10.3 #1-NixOS
> [ 7497.119226] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.128020] task:rsync           state:D stack:0     pid:23530 tgid:23530 ppid:23520  flags:0x00000000
> [ 7497.128024] Call Trace:
> [ 7497.128025]  <TASK>
> [ 7497.128027]  __schedule+0x3fa/0x1550
> [ 7497.128030]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.128034]  schedule+0x27/0xf0
> [ 7497.128036]  schedule_timeout+0x15d/0x170
> [ 7497.128040]  __down_common+0x119/0x220
> [ 7497.128045]  down+0x47/0x60
> [ 7497.128048]  xfs_buf_lock+0x31/0xe0 [xfs]
> [ 7497.128131]  xfs_buf_find_lock+0x55/0x100 [xfs]
> [ 7497.128185]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
> [ 7497.128236]  xfs_buf_read_map+0x62/0x2a0 [xfs]
> [ 7497.128287]  ? xfs_read_agf+0x97/0x150 [xfs]
> [ 7497.128357]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
> [ 7497.128429]  ? xfs_read_agf+0x97/0x150 [xfs]
> [ 7497.128489]  xfs_read_agf+0x97/0x150 [xfs]
> [ 7497.128540]  xfs_alloc_read_agf+0x5a/0x200 [xfs]
> [ 7497.128589]  xfs_alloc_fix_freelist+0x345/0x660 [xfs]
> [ 7497.128641]  xfs_alloc_vextent_prepare_ag+0x2d/0x120 [xfs]
> [ 7497.128690]  xfs_alloc_vextent_exact_bno+0xd1/0x100 [xfs]
> [ 7497.128740]  xfs_ialloc_ag_alloc+0x177/0x610 [xfs]
> [ 7497.128812]  xfs_dialloc+0x219/0x7b0 [xfs]
> [ 7497.128864]  ? xfs_trans_alloc_icreate+0x93/0x120 [xfs]
> [ 7497.128935]  xfs_create+0x2c7/0x640 [xfs]
> [ 7497.128998]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.129001]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.129003]  ? get_cached_acl+0x4c/0x90
> [ 7497.129008]  xfs_generic_create+0x321/0x3a0 [xfs]
> [ 7497.129061]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.129065]  path_openat+0xf82/0x1240
> [ 7497.129072]  do_filp_open+0xc4/0x170
> [ 7497.129084]  do_sys_openat2+0xab/0xe0
> [ 7497.129090]  __x64_sys_openat+0x57/0xa0
> [ 7497.129093]  do_syscall_64+0xb7/0x200
> [ 7497.129096]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> [ 7497.129099] RIP: 0033:0x7f6809d2be2f
> [ 7497.129121] RSP: 002b:00007ffe3d410cf0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> [ 7497.129123] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6809d2be2f
> [ 7497.129124] RDX: 00000000000000c2 RSI: 00007ffe3d412fc0 RDI: 00000000ffffff9c
> [ 7497.129126] RBP: 000000000003a2f8 R08: 001f1108db8eff56 R09: 00007ffe3d410f2c
> [ 7497.129128] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffe3d41300b
> [ 7497.129129] R13: 00007ffe3d412fc0 R14: 8421084210842109 R15: 00007f6809dc6a80
> [ 7497.129133]  </TASK>
> [ 7497.129146] INFO: task kworker/u131:3:23611 blocked for more than 122 seconds.
> [ 7497.137277]       Not tainted 6.10.3 #1-NixOS
> [ 7497.142187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.150980] task:kworker/u131:3  state:D stack:0     pid:23611 tgid:23611 ppid:2      flags:0x00004000
> [ 7497.150986] Workqueue: writeback wb_workfn (flush-253:4)
> [ 7497.150993] Call Trace:
> [ 7497.150995]  <TASK>
> [ 7497.150998]  __schedule+0x3fa/0x1550
> [ 7497.151007]  schedule+0x27/0xf0
> [ 7497.151009]  schedule_timeout+0x15d/0x170
> [ 7497.151013]  __wait_for_common+0x90/0x1c0
> [ 7497.151015]  ? __pfx_schedule_timeout+0x10/0x10
> [ 7497.151020]  xfs_buf_iowait+0x1c/0xc0 [xfs]
> [ 7497.151094]  __xfs_buf_submit+0x132/0x1e0 [xfs]
> [ 7497.151146]  xfs_buf_read_map+0x129/0x2a0 [xfs]
> [ 7497.151197]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [ 7497.151267]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
> [ 7497.151336]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [ 7497.151396]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [ 7497.151446]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
> [ 7497.151497]  xfs_btree_lookup+0xea/0x500 [xfs]
> [ 7497.151546]  ? xfs_btree_increment+0x44/0x310 [xfs]
> [ 7497.151596]  xfs_alloc_fixup_trees+0x66/0x4c0 [xfs]
> [ 7497.151661]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
> [ 7497.151710]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
> [ 7497.151764]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
> [ 7497.151813]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.151817]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
> [ 7497.151884]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
> [ 7497.151938]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
> [ 7497.151999]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
> [ 7497.152048]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
> [ 7497.152105]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
> [ 7497.152155]  xfs_map_blocks+0x257/0x420 [xfs]
> [ 7497.152228]  iomap_writepages+0x271/0x9b0
> [ 7497.152235]  xfs_vm_writepages+0x67/0x90 [xfs]
> [ 7497.152287]  do_writepages+0x76/0x260
> [ 7497.152294]  ? uas_submit_urbs+0x8c/0x4c0 [uas]
> [ 7497.152297]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152300]  ? psi_group_change+0x213/0x3c0
> [ 7497.152305]  __writeback_single_inode+0x3d/0x350
> [ 7497.152307]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152309]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152312]  writeback_sb_inodes+0x21c/0x4e0
> [ 7497.152323]  __writeback_inodes_wb+0x4c/0xf0
> [ 7497.152325]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.152328]  wb_writeback+0x193/0x310
> [ 7497.152332]  wb_workfn+0x357/0x450
> [ 7497.152337]  process_one_work+0x18f/0x3b0
> [ 7497.152342]  worker_thread+0x233/0x340
> [ 7497.152345]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.152348]  kthread+0xcd/0x100
> [ 7497.152352]  ? __pfx_kthread+0x10/0x10
> [ 7497.152354]  ret_from_fork+0x31/0x50
> [ 7497.152358]  ? __pfx_kthread+0x10/0x10
> [ 7497.152360]  ret_from_fork_asm+0x1a/0x30
> [ 7497.152366]  </TASK>
> [ 7497.152368] INFO: task kworker/u131:4:23612 blocked for more than 123 seconds.
> [ 7497.160489]       Not tainted 6.10.3 #1-NixOS
> [ 7497.165390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.174190] task:kworker/u131:4  state:D stack:0     pid:23612 tgid:23612 ppid:2      flags:0x00004000
> [ 7497.174194] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.174200] Call Trace:
> [ 7497.174201]  <TASK>
> [ 7497.174203]  __schedule+0x3fa/0x1550
> [ 7497.174208]  schedule+0x27/0xf0
> [ 7497.174210]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.174214]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.174219]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.174227]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.174233]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.174235]  ? bio_split_rw+0x193/0x260
> [ 7497.174242]  md_handle_request+0x153/0x270
> [ 7497.174245]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.174248]  __submit_bio+0x190/0x240
> [ 7497.174252]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.174255]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.174257]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.174259]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.174263]  process_one_work+0x18f/0x3b0
> [ 7497.174266]  worker_thread+0x233/0x340
> [ 7497.174269]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.174271]  kthread+0xcd/0x100
> [ 7497.174273]  ? __pfx_kthread+0x10/0x10
> [ 7497.174276]  ret_from_fork+0x31/0x50
> [ 7497.174277]  ? __pfx_kthread+0x10/0x10
> [ 7497.174279]  ret_from_fork_asm+0x1a/0x30
> [ 7497.174285]  </TASK>
> [ 7497.174292] INFO: task kworker/u130:33:23645 blocked for more than 123 seconds.
> [ 7497.182499]       Not tainted 6.10.3 #1-NixOS
> [ 7497.187400] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.196203] task:kworker/u130:33 state:D stack:0     pid:23645 tgid:23645 ppid:2      flags:0x00004000
> [ 7497.196209] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
> [ 7497.196281] Call Trace:
> [ 7497.196282]  <TASK>
> [ 7497.196285]  __schedule+0x3fa/0x1550
> [ 7497.196289]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.196293]  schedule+0x27/0xf0
> [ 7497.196295]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
> [ 7497.196346]  ? __pfx_default_wake_function+0x10/0x10
> [ 7497.196351]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
> [ 7497.196400]  xlog_write+0x310/0x470 [xfs]
> [ 7497.196451]  xlog_cil_push_work+0x6a5/0x880 [xfs]
> [ 7497.196503]  process_one_work+0x18f/0x3b0
> [ 7497.196507]  worker_thread+0x233/0x340
> [ 7497.196510]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.196512]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.196515]  kthread+0xcd/0x100
> [ 7497.196517]  ? __pfx_kthread+0x10/0x10
> [ 7497.196519]  ret_from_fork+0x31/0x50
> [ 7497.196522]  ? __pfx_kthread+0x10/0x10
> [ 7497.196524]  ret_from_fork_asm+0x1a/0x30
> [ 7497.196529]  </TASK>
> [ 7497.196531] INFO: task kworker/u131:6:23863 blocked for more than 123 seconds.
> [ 7497.204648]       Not tainted 6.10.3 #1-NixOS
> [ 7497.209539] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.218347] task:kworker/u131:6  state:D stack:0     pid:23863 tgid:23863 ppid:2      flags:0x00004000
> [ 7497.218353] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.218359] Call Trace:
> [ 7497.218360]  <TASK>
> [ 7497.218363]  __schedule+0x3fa/0x1550
> [ 7497.218369]  schedule+0x27/0xf0
> [ 7497.218371]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.218375]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.218379]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.218384]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.218390]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.218392]  ? bio_split_rw+0x193/0x260
> [ 7497.218398]  md_handle_request+0x153/0x270
> [ 7497.218401]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.218405]  __submit_bio+0x190/0x240
> [ 7497.218408]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.218410]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.218413]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.218415]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.218419]  process_one_work+0x18f/0x3b0
> [ 7497.218423]  worker_thread+0x233/0x340
> [ 7497.218426]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.218428]  kthread+0xcd/0x100
> [ 7497.218430]  ? __pfx_kthread+0x10/0x10
> [ 7497.218433]  ret_from_fork+0x31/0x50
> [ 7497.218435]  ? __pfx_kthread+0x10/0x10
> [ 7497.218437]  ret_from_fork_asm+0x1a/0x30
> [ 7497.218442]  </TASK>
> [ 7497.218444] INFO: task kworker/u131:7:23864 blocked for more than 123 seconds.
> [ 7497.226572]       Not tainted 6.10.3 #1-NixOS
> [ 7497.231475] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 7497.240277] task:kworker/u131:7  state:D stack:0     pid:23864 tgid:23864 ppid:2      flags:0x00004000
> [ 7497.240282] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
> [ 7497.240287] Call Trace:
> [ 7497.240288]  <TASK>
> [ 7497.240290]  __schedule+0x3fa/0x1550
> [ 7497.240298]  schedule+0x27/0xf0
> [ 7497.240301]  md_bitmap_startwrite+0x14f/0x1c0
> [ 7497.240304]  ? __pfx_autoremove_wake_function+0x10/0x10
> [ 7497.240310]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [ 7497.240314]  raid5_make_request+0x34d/0x1280 [raid456]
> [ 7497.240320]  ? __pfx_woken_wake_function+0x10/0x10
> [ 7497.240322]  ? bio_split_rw+0x193/0x260
> [ 7497.240328]  md_handle_request+0x153/0x270
> [ 7497.240330]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.240334]  __submit_bio+0x190/0x240
> [ 7497.240338]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [ 7497.240340]  ? srso_alias_return_thunk+0x5/0xfbef5
> [ 7497.240342]  ? submit_bio_noacct+0x46/0x5a0
> [ 7497.240345]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [ 7497.240348]  process_one_work+0x18f/0x3b0
> [ 7497.240353]  worker_thread+0x233/0x340
> [ 7497.240356]  ? __pfx_worker_thread+0x10/0x10
> [ 7497.240358]  kthread+0xcd/0x100
> [ 7497.240361]  ? __pfx_kthread+0x10/0x10
> [ 7497.240364]  ret_from_fork+0x31/0x50
> [ 7497.240366]  ? __pfx_kthread+0x10/0x10
> [ 7497.240368]  ret_from_fork_asm+0x1a/0x30
> [ 7497.240375]  </TASK>
> [ 7497.240376] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings

>> 
>> 
>> 
>>> Kernel:
>> 
>>> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024
>> 
>>> The config is unchanged except for the deprecated NFSD_V2_ACL and NFSD_V3 options, which I had to remove. NFS is not in use on this server, though.
>> 
>>> Output:
>> 
>>> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
>>> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
>>> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.860435] Call Trace:
>>> [ 4549.860437]  <TASK>
>>> [ 4549.860440]  __schedule+0x373/0x1580
>>> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
>>> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
>>> [ 4549.860453]  schedule+0x5b/0xe0
>>> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.860459]  ? finish_wait+0x90/0x90
>>> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
>>> [ 4549.860490]  ? finish_wait+0x90/0x90
>>> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
>>> [ 4549.860495]  md_handle_request+0x11f/0x1b0
>>> [ 4549.860500]  md_submit_bio+0x6e/0xb0
>>> [ 4549.860502]  __submit_bio+0x18c/0x220
>>> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.860517]  process_one_work+0x1d3/0x360
>>> [ 4549.860521]  worker_thread+0x4d/0x3b0
>>> [ 4549.860523]  ? process_one_work+0x360/0x360
>>> [ 4549.860525]  kthread+0x115/0x140
>>> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.860530]  ret_from_fork+0x1f/0x30
>>> [ 4549.860535]  </TASK>
>>> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
>>> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
>>> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.882368] Call Trace:
>>> [ 4549.882369]  <TASK>
>>> [ 4549.882370]  __schedule+0x373/0x1580
>>> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4549.882379]  schedule+0x5b/0xe0
>>> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.882384]  ? finish_wait+0x90/0x90
>>> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.882403]  ? finish_wait+0x90/0x90
>>> [ 4549.882406]  md_handle_request+0x11f/0x1b0
>>> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
>>> [ 4549.882413]  md_submit_bio+0x6e/0xb0
>>> [ 4549.882415]  __submit_bio+0x18c/0x220
>>> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.882428]  process_one_work+0x1d3/0x360
>>> [ 4549.882431]  worker_thread+0x4d/0x3b0
>>> [ 4549.882433]  ? process_one_work+0x360/0x360
>>> [ 4549.882435]  kthread+0x115/0x140
>>> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.882438]  ret_from_fork+0x1f/0x30
>>> [ 4549.882442]  </TASK>
>>> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
>>> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
>>> [ 4549.904411] Call Trace:
>>> [ 4549.904412]  <TASK>
>>> [ 4549.904414]  __schedule+0x373/0x1580
>>> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
>>> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
>>> [ 4549.904498]  schedule+0x5b/0xe0
>>> [ 4549.904500]  io_schedule+0x42/0x70
>>> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
>>> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
>>> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
>>> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
>>> [ 4549.904520]  evict+0x15f/0x180
>>> [ 4549.904524]  __dentry_kill+0xde/0x170
>>> [ 4549.904527]  dput+0x139/0x320
>>> [ 4549.904529]  do_renameat2+0x375/0x5f0
>>> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
>>> [ 4549.904538]  do_syscall_64+0x34/0x80
>>> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
>>> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
>>> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
>>> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
>>> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
>>> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
>>> [ 4549.904555]  </TASK>
>>> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
>>> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
>>> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.926380] Call Trace:
>>> [ 4549.926381]  <TASK>
>>> [ 4549.926383]  __schedule+0x373/0x1580
>>> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4549.926392]  schedule+0x5b/0xe0
>>> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.926397]  ? finish_wait+0x90/0x90
>>> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4549.926413]  ? finish_wait+0x90/0x90
>>> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4549.926418]  md_handle_request+0x11f/0x1b0
>>> [ 4549.926422]  md_submit_bio+0x6e/0xb0
>>> [ 4549.926424]  __submit_bio+0x18c/0x220
>>> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.926437]  process_one_work+0x1d3/0x360
>>> [ 4549.926441]  worker_thread+0x4d/0x3b0
>>> [ 4549.926442]  ? process_one_work+0x360/0x360
>>> [ 4549.926444]  kthread+0x115/0x140
>>> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.926448]  ret_from_fork+0x1f/0x30
>>> [ 4549.926454]  </TASK>
>>> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
>>> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
>>> [ 4549.947503] Call Trace:
>>> [ 4549.947505]  <TASK>
>>> [ 4549.947505]  ? usleep_range_state+0x90/0x90
>>> [ 4549.947510]  __schedule+0x373/0x1580
>>> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
>>> [ 4549.947519]  ? usleep_range_state+0x90/0x90
>>> [ 4549.947521]  schedule+0x5b/0xe0
>>> [ 4549.947523]  schedule_timeout+0xff/0x130
>>> [ 4549.947526]  __wait_for_common+0xaf/0x160
>>> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
>>> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
>>> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
>>> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
>>> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
>>> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
>>> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
>>> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
>>> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>>> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
>>> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
>>> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
>>> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
>>> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
>>> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
>>> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
>>> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
>>> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
>>> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
>>> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
>>> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
>>> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
>>> [ 4549.948340]  vfs_rename+0x818/0x10d0
>>> [ 4549.948345]  ? lookup_dcache+0x17/0x60
>>> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
>>> [ 4549.948350]  do_renameat2+0x57f/0x5f0
>>> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
>>> [ 4549.948357]  do_syscall_64+0x34/0x80
>>> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
>>> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
>>> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
>>> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
>>> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
>>> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
>>> [ 4549.948373]  </TASK>
>>> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
>>> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
>>> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.970205] Call Trace:
>>> [ 4549.970206]  <TASK>
>>> [ 4549.970209]  __schedule+0x373/0x1580
>>> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.970215]  schedule+0x5b/0xe0
>>> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.970219]  ? finish_wait+0x90/0x90
>>> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
>>> [ 4549.970244]  ? finish_wait+0x90/0x90
>>> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
>>> [ 4549.970248]  md_handle_request+0x11f/0x1b0
>>> [ 4549.970251]  md_submit_bio+0x6e/0xb0
>>> [ 4549.970254]  __submit_bio+0x18c/0x220
>>> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.970267]  process_one_work+0x1d3/0x360
>>> [ 4549.970270]  worker_thread+0x4d/0x3b0
>>> [ 4549.970272]  ? process_one_work+0x360/0x360
>>> [ 4549.970274]  kthread+0x115/0x140
>>> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.970278]  ret_from_fork+0x1f/0x30
>>> [ 4549.970282]  </TASK>
>>> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
>>> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
>>> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
>>> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4549.992097] Call Trace:
>>> [ 4549.992098]  <TASK>
>>> [ 4549.992100]  __schedule+0x373/0x1580
>>> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4549.992109]  schedule+0x5b/0xe0
>>> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4549.992114]  ? finish_wait+0x90/0x90
>>> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
>>> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
>>> [ 4549.992135]  ? finish_wait+0x90/0x90
>>> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
>>> [ 4549.992139]  md_handle_request+0x11f/0x1b0
>>> [ 4549.992142]  md_submit_bio+0x6e/0xb0
>>> [ 4549.992144]  __submit_bio+0x18c/0x220
>>> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4549.992157]  process_one_work+0x1d3/0x360
>>> [ 4549.992160]  worker_thread+0x4d/0x3b0
>>> [ 4549.992162]  ? process_one_work+0x360/0x360
>>> [ 4549.992163]  kthread+0x115/0x140
>>> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
>>> [ 4549.992168]  ret_from_fork+0x1f/0x30
>>> [ 4549.992172]  </TASK>
>>> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
>>> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
>>> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
>>> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4550.013992] Call Trace:
>>> [ 4550.013993]  <TASK>
>>> [ 4550.013995]  __schedule+0x373/0x1580
>>> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4550.014003]  schedule+0x5b/0xe0
>>> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4550.014008]  ? finish_wait+0x90/0x90
>>> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4550.014022]  ? finish_wait+0x90/0x90
>>> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4550.014027]  md_handle_request+0x11f/0x1b0
>>> [ 4550.014030]  md_submit_bio+0x6e/0xb0
>>> [ 4550.014032]  __submit_bio+0x18c/0x220
>>> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4550.014044]  process_one_work+0x1d3/0x360
>>> [ 4550.014047]  worker_thread+0x4d/0x3b0
>>> [ 4550.014049]  ? process_one_work+0x360/0x360
>>> [ 4550.014050]  kthread+0x115/0x140
>>> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
>>> [ 4550.014054]  ret_from_fork+0x1f/0x30
>>> [ 4550.014058]  </TASK>
>>> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
>>> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
>>> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
>>> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4550.035887] Call Trace:
>>> [ 4550.035888]  <TASK>
>>> [ 4550.035890]  __schedule+0x373/0x1580
>>> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4550.035899]  schedule+0x5b/0xe0
>>> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4550.035904]  ? finish_wait+0x90/0x90
>>> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4550.035919]  ? finish_wait+0x90/0x90
>>> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4550.035924]  md_handle_request+0x11f/0x1b0
>>> [ 4550.035927]  md_submit_bio+0x6e/0xb0
>>> [ 4550.035929]  __submit_bio+0x18c/0x220
>>> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4550.035942]  process_one_work+0x1d3/0x360
>>> [ 4550.035946]  worker_thread+0x4d/0x3b0
>>> [ 4550.035948]  ? process_one_work+0x360/0x360
>>> [ 4550.035949]  kthread+0x115/0x140
>>> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
>>> [ 4550.035953]  ret_from_fork+0x1f/0x30
>>> [ 4550.035957]  </TASK>
>>> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
>>> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
>>> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
>>> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>> [ 4550.057794] Call Trace:
>>> [ 4550.057796]  <TASK>
>>> [ 4550.057798]  __schedule+0x373/0x1580
>>> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>> [ 4550.057806]  schedule+0x5b/0xe0
>>> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
>>> [ 4550.057810]  ? finish_wait+0x90/0x90
>>> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
>>> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
>>> [ 4550.057824]  ? finish_wait+0x90/0x90
>>> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
>>> [ 4550.057828]  md_handle_request+0x11f/0x1b0
>>> [ 4550.057831]  md_submit_bio+0x6e/0xb0
>>> [ 4550.057834]  __submit_bio+0x18c/0x220
>>> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
>>> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
>>> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>> [ 4550.057846]  process_one_work+0x1d3/0x360
>>> [ 4550.057848]  worker_thread+0x4d/0x3b0
>>> [ 4550.057850]  ? process_one_work+0x360/0x360
>>> [ 4550.057852]  kthread+0x115/0x140
>>> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
>>> [ 4550.057856]  ret_from_fork+0x1f/0x30
>>> [ 4550.057860]  </TASK>
>> 
>> 
>>>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>>>> 
>>>> I tried updating to 5.15.164, but had to struggle against our config management as some options have shifted and need to be filtered out: NFSD_V3 and NFSD_V2_ACL are now fixed and cause config errors if set - I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
>>>> 
>>>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>>>> 
>>>>> Sure,
>>>>> 
>>>>> would you prefer me testing on 5.15.x or something else?
>>>>> 
>>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> 在 2024/08/06 22:10, Christian Theune 写道:
>>>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array that is encrypted using LUKS.
>>>>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 gib in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>>>>> Kernel:
>>>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>>> 
>>>>> Since you can trigger this easily, I'd suggest you try the latest
>>>>> kernel release first.
>>>>> 
>>>>> Thanks,
>>>>> Kuai
>>>>> 
>>>>>>> See the kernel config attached.
>>>>> 
>>>>> 
>>>>> Liebe Grüße,
>>>>> Christian Theune
>>>>> 
>>>>> -- 
>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>> 
>>>>> 
>>>> 
>>>> Liebe Grüße,
>>>> Christian Theune
>>>> 
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>> 
>> 
>>> Liebe Grüße,
>>> Christian Theune
>> 
>>> -- 
>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> 
>> 

> Liebe Grüße,
> Christian Theune

> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-08  8:53                 ` Christian Theune
@ 2024-08-09  1:13                   ` Yu Kuai
  2024-08-09  6:10                     ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-08-09  1:13 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

在 2024/08/08 16:53, Christian Theune 写道:
> Hi,
> 
>> On 8. Aug 2024, at 09:06, Christian Theune <ct@flyingcircus.io> wrote:
>>>
>>> 1) With the hangtaks, are the underlying disks idle?(By iostat). And can
>>> you please collect /sys/block/[disk]/inflight for both raid and
>>> underlying disks.
> 
> The underlying disks are completely idle, while the md reports 100% utilization:
> 
> Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
> dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00 100.00
> dm-1             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> dm-2             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> dm-3             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> dm-4             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00 100.00
> md127            0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00 100.00
> nvme0n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme10n1         0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme11n1         0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme1n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme2n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme3n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme4n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme5n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme6n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme7n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme8n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> nvme9n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> sda              0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
> 
> Inflight seems 0:
> 
> $ cat /sys/block/nvme*/inflight
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
>         0        0
> 
> The md has 348 inflight:
> 
> /sys/block/md127
>         0      348
> 
> Does this help for now?

Yes, the IOs are definitely stuck in md127 and never get dispatched to the
nvme devices; for now I'd say this is a raid5 problem.

Can you describe, step by step, how you reproduce this problem? We must
figure out where those IOs are and why they're not dispatched. It'll be
easier to debug if I can reproduce it myself; otherwise I'll have to give
you a debug patch.
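In the meantime, it may help to snapshot the in-flight counters repeatedly while a hang is in progress. A rough sketch (the `sum_inflight` helper and the md127/nvme device names are assumptions, not part of the report):

```shell
# Sketch: sum the "<reads> <writes>" pairs from /sys/block/*/inflight.
sum_inflight() {
  awk '{ r += $1; w += $2 } END { printf "reads=%d writes=%d\n", r, w }'
}

# On the affected machine one would run something like:
#   cat /sys/block/md127/inflight /sys/block/nvme*n1/inflight | sum_inflight

# Demonstration on the values quoted above (md127 reported "0 348"):
printf '0 348\n0 0\n0 0\n' | sum_inflight   # prints: reads=0 writes=348
```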

Thanks,
Kuai

> 
> Liebe Grüße,
> Christian Theune
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-09  1:13                   ` Yu Kuai
@ 2024-08-09  6:10                     ` Christian Theune
  2024-08-09 22:51                       ` John Stoffel
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-09  6:10 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 9. Aug 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> 
> Yes, for sure IO are stuck in md127 and never get dispatched to nvme,
> for now I'll say this is a raid5 problem.

Note that this is raid6, not raid5! Sorry, I never explicitly mentioned that; it was buried in the mdstat output.

Not sure whether that code is internally the same anyway…

>> Can you describe, step by step, how you reproduce this problem? We must
>> figure out where those IOs are and why they're not dispatched. It'll be
>> easier to debug if I can reproduce it myself; otherwise I'll have to give
>> you a debug patch.

My workload is pretty simple IMHO:

1. This is on an XFS volume:

meta-data=/dev/mapper/backy      isize=512    agcount=112, agsize=268435328 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1
data     =                       bsize=4096   blocks=30001587200, imaxpct=1
         =                       sunit=128    swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

2. I’m rsyncing a large number of files using:

rsync -avz "<remote-machine>:/<remote-folder>" .

Specific aspects of the data that might come into play:

After 5-6 tries of syncing and locking up, I now have a directory on that volume that contains 154.820 files directly and 1.828.933 files recursively. The tree uses a /year/<hash prefix>/<hash>-filename layout to spread the 1.8 million files out more evenly.

The total amount of data is 1.0 TiB at the moment. The smallest files are empty or just a few kilobytes. The mode of the filesizes is 180.984 bytes.
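For anyone trying to reproduce this elsewhere, a scaled-down synthetic tree with the same year/<hash prefix>/<hash>-name shape could be generated along these lines (a sketch; the file counts, the ~180 KiB size, and the temp output path are placeholders, not the real dataset):

```shell
# Sketch: build a small tree shaped like year/<hash prefix>/<hash>-file.
# 10 files per year here; scale the loop bound up towards the real ~1.8M.
outdir=$(mktemp -d)
for year in 2022 2023 2024; do
  i=0
  while [ "$i" -lt 10 ]; do
    # derive a stable fake content hash from year+index
    hash=$(printf '%s-%s' "$year" "$i" | sha256sum | cut -c1-32)
    prefix=$(printf '%s' "$hash" | cut -c1-2)
    mkdir -p "$outdir/$year/$prefix"
    # ~180 KiB per file matches the reported modal file size
    head -c 180984 /dev/urandom > "$outdir/$year/$prefix/$hash-file"
    i=$((i + 1))
  done
done
echo "generated $(find "$outdir" -type f | wc -l) files under $outdir"
```

Rsyncing such a tree onto the affected array would then mimic the failing workload.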

There is a second workload on the machine that has been running smoothly for a few weeks: it backs up virtual machine images and creates a somewhat similar (but more consistent) hashed structure, using a more complex diff + content hashing + compression approach instead of rsync. However, that other tool does emit write barriers here and there - which rsync doesn’t, AFAIK.

I double checked, but the other workload was not active during my last lockup.

Happy to apply a debug patch to help diagnose this.

Cheers,
Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-09  6:10                     ` Christian Theune
@ 2024-08-09 22:51                       ` John Stoffel
  2024-08-12  6:58                         ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: John Stoffel @ 2024-08-09 22:51 UTC (permalink / raw)
  To: Christian Theune
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:

> Hi,
>> On 9. Aug 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>> 
>> 
>> Yes, for sure IO are stuck in md127 and never get dispatched to nvme,
>> for now I'll say this is a raid5 problem.

> Note that this is raid6, not raid5! Sorry, I never explicitly
> mentioned that and it was buried in the mdstat output.

That's good info.  

I wonder if you could setup some loop devices, build a RAID6 array,
put XFS on it and try to replicate the problem by rsyncing a bunch of
files. 



> Not sure whether that code is internally the same anyway…

>> Can you describe the steps to reproduce this problem? We must
>> figure out where those IOs are and why they're not dispatched. It'll
>> be easier to debug if I can reproduce it. Otherwise I'll have to give
>> you a debug patch.

> My workload is pretty simple IMHO:

> 1. This is on an XFS volume:

> meta-data=/dev/mapper/backy      isize=512    agcount=112, agsize=268435328 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=1, rmapbt=0
>          =                       reflink=1    bigtime=1
> data     =                       bsize=4096   blocks=30001587200, imaxpct=1
>          =                       sunit=128    swidth=1024 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

> 2. I’m rsyncing a large number of files using:

> rsync -avz "<remote-machine>:/<remote-folder>" .

> Specific aspects of the data that might come into play:

> After 5-6 tries of syncing and locking up, I now have a directory on that volume that contains 154,820 files directly and 1,828,933 files recursively. The recursive structure uses a /year/<hash prefix>/<hash>-filename layout to spread the 1.8 million files out more evenly.

> The total amount of data is 1.0 TiB at the moment. The smallest files are empty or just a few kilobytes. The mode of the file sizes is 180,984 bytes.

> There is a second workload on the machine that has been running smoothly for a few weeks which backs up virtual machine images and creates a somewhat similar (but more consistent) hashed structure, but doesn’t use rsync; it uses a more complex diff + content hashing + compression approach. However, that other tool does emit write barriers here and there, which rsync doesn’t, AFAIK.

> I double checked, but the other workload was not active during my last lockup.

> Happy to apply a debug patch to help diagnosing this.

> Cheers,
> Christian

> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-09 22:51                       ` John Stoffel
@ 2024-08-12  6:58                         ` Christian Theune
  2024-08-12 18:37                           ` John Stoffel
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-12  6:58 UTC (permalink / raw)
  To: John Stoffel; +Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi John,
Hi Yu,

> On 10. Aug 2024, at 00:51, John Stoffel <john@stoffel.org> wrote:
> 
>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
> 
>> Hi,
>>> On 9. Aug 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>> 
>>> 
>>> Yes, for sure IO are stuck in md127 and never get dispatched to nvme,
>>> for now I'll say this is a raid5 problem.
> 
>> Note, that this is raid6, not raid5! Sorry, I never explicitly
>> mentioned that and it was buried in the mdstat output.
> 
> That's good info.  
> 
> I wonder if you could setup some loop devices, build a RAID6 array,
> put XFS on it and try to replicate the problem by rsyncing a bunch of
> files. 

I was about to try this, but I’m wondering what backing devices you had in mind here? If I place images for loop on the original (defective) RAID 6 setup then this wouldn’t give us much info.

However, I could take the hot spare and run a sequence of tests against that, first with a newer and potentially with an older kernel if it doesn’t reproduce in its final form:

- xfs directly on the nvme drive
- xfs on encrypted nvme drive
- xfs on raid 1 on nvme drive, split into two partitions
- xfs on raid 5 on nvme drive, split into a few partitions
- xfs on raid 6 on nvme drive, split into a few partitions
- repeat the tests with raid1/5/6 with encrypted partitions

As that will take some time and effort, I’d like to double check whether that sounds sensible to you as well?

Cheers,
Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-12  6:58                         ` Christian Theune
@ 2024-08-12 18:37                           ` John Stoffel
  2024-08-14  8:53                             ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: John Stoffel @ 2024-08-12 18:37 UTC (permalink / raw)
  To: Christian Theune
  Cc: John Stoffel, Yu Kuai, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:

> Hi John,
> Hi Yu,

>> On 10. Aug 2024, at 00:51, John Stoffel <john@stoffel.org> wrote:
>> 
>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>> 
>>> Hi,
>>>> On 9. Aug 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>> 
>>>> 
>>>> Yes, for sure IO are stuck in md127 and never get dispatched to nvme,
>>>> for now I'll say this is a raid5 problem.
>> 
>>> Note, that this is raid6, not raid5! Sorry, I never explicitly
>>> mentioned that and it was buried in the mdstat output.
>> 
>> That's good info.  
>> 
>> I wonder if you could setup some loop devices, build a RAID6 array,
>> put XFS on it and try to replicate the problem by rsyncing a bunch of
>> files. 

> I was about to try this, but I’m wondering what backing devices you
> had in mind here? If I place images for loop on the original
> (defective) RAID 6 setup then this wouldn’t give us much info.

Just try it in RAM at first, if you can make it work.  Or put the
files in /tmp which should be a tmpfs filesystem backed by swap.   
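The loop-device approach could be sketched roughly as below. This is an untested outline, not a vetted script: it assumes mdadm, losetup and mkfs.xfs are installed, and /dev/md99, the sizes, and the paths are arbitrary illustrative choices, so it is gated behind an explicit opt-in variable (RUN_RAID_TEST, my own invention).

```shell
#!/bin/sh
# Sketch: throwaway RAID6 + XFS on loop devices backed by tmpfs files.
# Touches /dev/md99 if run for real; opt in explicitly before running.
set -e
if [ "${RUN_RAID_TEST:-0}" != "1" ] || [ "$(id -u)" -ne 0 ]; then
    echo "set RUN_RAID_TEST=1 and run as root to actually build the array"
    exit 0
fi
NDISKS=4
SIZE=2G
mkdir -p /tmp/raidtest
loops=""
i=0
while [ "$i" -lt "$NDISKS" ]; do
    truncate -s "$SIZE" "/tmp/raidtest/disk$i.img"   # sparse backing file
    loops="$loops $(losetup --find --show /tmp/raidtest/disk$i.img)"
    i=$((i + 1))
done
mdadm --create /dev/md99 --level=6 --raid-devices="$NDISKS" $loops
mkfs.xfs /dev/md99
mkdir -p /mnt/raidtest
mount /dev/md99 /mnt/raidtest
# ... now rsync the test fileset into /mnt/raidtest ...
```

With /tmp on tmpfs the backing files stay in RAM, which matches the "try it in RAM first" suggestion.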

> However, I could take the hot spare and run a sequence of tests
> against that, first with a newer and potentially with an older
> kernel if it doesn’t reproduce in its final form:

That's one option of course.  

> - xfs directly on the nvme drive
> - xfs on encrypted nvme drive
> - xfs on raid 1 on nvme drive, split into two partitions
> - xfs on raid 5 on nvme drive, split into a few partitions
> - xfs on raid 6 on nvme drive, split into a few partitions
> - repeat the tests with raid1/5/6 with encrypted partitions

That's an awesome test setup to run through and might take a bunch of
time.  

> As that will take some time and effort, I’d like to double check
> whether that sounds sensible to you as well?

I'd probably just do the RAID6 tests first, get them out of the way.  

> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-12 18:37                           ` John Stoffel
@ 2024-08-14  8:53                             ` Christian Theune
  2024-08-15  6:19                               ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-14  8:53 UTC (permalink / raw)
  To: John Stoffel; +Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
> 
> Just try it in RAM at first, if you can make it work.  Or put the
> files in /tmp which should be a tmpfs filesystem backed by swap.   

Thanks, I kinda expected that, but wasn’t sure. I’ll see whether I can make this trigger (reliably) before I run out of RAM. If I do, I’ll have to see. 

>> However, I could take the hot spare and run a sequence of tests
>> against that, first with a newer and potentially with an older
>> kernel if it doesn’t reproduce in its final form:
> 
> That's one option of course.  
> 
>> - xfs directly on the nvme drive
>> - xfs on encrypted nvme drive
>> - xfs on raid 1 on nvme drive, split into two partitions
>> - xfs on raid 5 on nvme drive, split into a few partitions
>> - xfs on raid 6 on nvme drive, split into a few partitions
>> - repeat the tests with raid1/5/6 with encrypted partitions
> 
> That's an awesome test setup to run through and might take a bunch of
> time.  

I’ll try to prepare scripts to set this up. I currently have a specific fileset that I’m using here; I hope I can replicate it with a different fileset at some point … -_-
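For what it's worth, a small generator along these lines could fabricate a comparable /year/<hash prefix>/<hash>-filename layout for testing. This is a sketch only; the naming scheme, size distribution and counts are illustrative stand-ins, not the real dataset.

```python
import hashlib
import os
import random


def make_fileset(root, years=(2022, 2023, 2024), files_per_year=1000,
                 prefix_len=2, mode_size=180_984, seed=42):
    """Create a /year/<hash prefix>/<hash>-<name> tree of small files,
    loosely mimicking the layout described in this thread."""
    rng = random.Random(seed)  # fixed seed so repeated runs are comparable
    created = 0
    for year in years:
        for i in range(files_per_year):
            name = f"file-{year}-{i}"
            digest = hashlib.sha256(name.encode()).hexdigest()
            subdir = os.path.join(root, str(year), digest[:prefix_len])
            os.makedirs(subdir, exist_ok=True)
            # some files empty, the rest spread around the observed mode
            size = 0 if rng.random() < 0.05 else rng.randint(1, 2 * mode_size)
            with open(os.path.join(subdir, f"{digest}-{name}"), "wb") as f:
                f.write(os.urandom(size))
            created += 1
    return created
```

Pointing `root` at the mounted test array and raising `files_per_year` should approximate the rsync write pattern without needing the original data.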

>> As that will take some time and effort, I’d like to double check
>> whether that sounds sensible to you as well?
> 
> I'd probably just do the RAID6 tests first, get them out of the way.  

Alright, those are running right now - I’ll let you know what happens.

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-14  8:53                             ` Christian Theune
@ 2024-08-15  6:19                               ` Christian Theune
  2024-08-15 10:03                                 ` Christian Theune
  2024-08-15 15:53                                 ` John Stoffel
  0 siblings, 2 replies; 88+ messages in thread
From: Christian Theune @ 2024-08-15  6:19 UTC (permalink / raw)
  To: John Stoffel; +Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 14. Aug 2024, at 10:53, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
>> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
>> 
>> I'd probably just do the RAID6 tests first, get them out of the way.  
> 
> Alright, those are running right now - I’ll let you know what happens.

I’m not making progress here. I can’t reproduce this on an in-memory loopback RAID 6. However, I can’t fully reproduce the rsync either: on the NVMe this only triggered after around 1.5 hours of progress, which resulted in the hangup. I can only create around 20 GiB worth of RAID 6 volume on this machine. I’ve tried running rsync until it exhausts the space, deleting the content and running rsync again, but I feel like this isn’t sufficient to trigger the issue. :(

I’m trying to find whether any specific pattern in the files around the time it locks up might be relevant here and try to run the rsync over that portion.

On the plus side, I have a script now that can create the various loopback settings quickly, so I can try out things as needed. Not that valuable without a reproducer, yet, though.

@Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15  6:19                               ` Christian Theune
@ 2024-08-15 10:03                                 ` Christian Theune
  2024-08-15 11:14                                   ` Yu Kuai
  2024-08-15 15:53                                 ` John Stoffel
  1 sibling, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-15 10:03 UTC (permalink / raw)
  To: John Stoffel; +Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

small insight: even given my dataset that can reliably trigger this (after around 1.5 hours of rsyncing) it does not trigger on a specific set of files. I’ve deleted the data and started the rsync on a fresh directory (not a fresh filesystem, I can’t delete that as it carries important data) but it doesn’t always get stuck on the same files, even though rsync processes them in a repeatable order.

I’m wondering how to generate more insights from that. Maybe keeping a blktrace log might help? 
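A blktrace capture on the stuck array could look roughly like this (a sketch; /dev/md127 is the array name from earlier in the thread, and the capture window is an arbitrary choice):

```shell
#!/bin/sh
# Sketch: record block-layer events on the array for a bounded window,
# then summarize them with blkparse. Needs root and the blktrace tools.
set -e
if [ "$(id -u)" -ne 0 ] || ! command -v blktrace >/dev/null 2>&1; then
    echo "needs root and blktrace installed; skipping"
    exit 0
fi
OUT=/var/tmp/blktrace
mkdir -p "$OUT"
blktrace -d /dev/md127 -D "$OUT" -w 300   # -w 300: stop after 5 minutes
blkparse -i "$OUT/md127" > "$OUT/md127.parsed"
```

Running it in a loop and keeping only the window that overlaps the lockup would bound the log size.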

It sounds like the specific pattern relies on XFS doing a specific thing there … 

Wild idea: maybe running the xfstest suite on an in-memory raid 6 setup could reproduce this?

I’m guessing that the xfs people do not regularly run their test suite on a layered setup like mine with encryption and software raid?
 
Christian

> On 15. Aug 2024, at 08:19, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
>> On 14. Aug 2024, at 10:53, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Hi,
>> 
>>> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
>>> 
>>> I'd probably just do the RAID6 tests first, get them out of the way.  
>> 
>> Alright, those are running right now - I’ll let you know what happens.
> 
> I’m not making progress here. I can’t reproduce this on an in-memory loopback RAID 6. However, I can’t fully reproduce the rsync either: on the NVMe this only triggered after around 1.5 hours of progress, which resulted in the hangup. I can only create around 20 GiB worth of RAID 6 volume on this machine. I’ve tried running rsync until it exhausts the space, deleting the content and running rsync again, but I feel like this isn’t sufficient to trigger the issue. :(
> 
> I’m trying to find whether any specific pattern in the files around the time it locks up might be relevant here and try to run the rsync over that portion.
> 
> On the plus side, I have a script now that can create the various loopback settings quickly, so I can try out things as needed. Not that valuable without a reproducer, yet, though.
> 
> @Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?
> 
> Christian
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15 10:03                                 ` Christian Theune
@ 2024-08-15 11:14                                   ` Yu Kuai
  2024-08-15 11:24                                     ` Christian Theune
  2024-10-22 15:02                                     ` Christian Theune
  0 siblings, 2 replies; 88+ messages in thread
From: Yu Kuai @ 2024-08-15 11:14 UTC (permalink / raw)
  To: Christian Theune, John Stoffel
  Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

On 2024/08/15 18:03, Christian Theune wrote:
> Hi,
> 
> small insight: even given my dataset that can reliably trigger this (after around 1.5 hours of rsyncing) it does not trigger on a specific set of files. I’ve deleted the data and started the rsync on a fresh directory (not a fresh filesystem, I can’t delete that as it carries important data) but it doesn’t always get stuck on the same files, even though rsync processes them in a repeatable order.
> 
> I’m wondering how to generate more insights from that. Maybe keeping a blktrace log might help?
> 
> It sounds like the specific pattern relies on XFS doing a specific thing there …
> 
> Wild idea: maybe running the xfstest suite on an in-memory raid 6 setup could reproduce this?
> 
> I’m guessing that the xfs people do not regularly run their test suite on a layered setup like mine with encryption and software raid?

That sounds great.
>   
> Christian
> 
>> On 15. Aug 2024, at 08:19, Christian Theune <ct@flyingcircus.io> wrote:
>>
>> Hi,
>>
>>> On 14. Aug 2024, at 10:53, Christian Theune <ct@flyingcircus.io> wrote:
>>>
>>> Hi,
>>>
>>>> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
>>>>
>>>> I'd probably just do the RAID6 tests first, get them out of the way.
>>>
>>> Alright, those are running right now - I’ll let you know what happens.
>>
>> I’m not making progress here. I can’t reproduce this on an in-memory loopback RAID 6. However, I can’t fully reproduce the rsync either: on the NVMe this only triggered after around 1.5 hours of progress, which resulted in the hangup. I can only create around 20 GiB worth of RAID 6 volume on this machine. I’ve tried running rsync until it exhausts the space, deleting the content and running rsync again, but I feel like this isn’t sufficient to trigger the issue. :(
>>
>> I’m trying to find whether any specific pattern in the files around the time it locks up might be relevant here and try to run the rsync over that portion.
>>
>> On the plus side, I have a script now that can create the various loopback settings quickly, so I can try out things as needed. Not that valuable without a reproducer, yet, though.
>>
>> @Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?

Yes, however, I still need some time to sort out the internal process of
raid5. I'm quite busy with some other work stuff and I'm familiar with
raid1/10, but not too much about raid5. :(

Main idea is to figure out why IO are not dispatched to underlying
disks.

Thanks,
Kuai

>>
>> Christian
>>
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 
> 
> Kind regards,
> Christian Theune
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15 11:14                                   ` Yu Kuai
@ 2024-08-15 11:24                                     ` Christian Theune
  2024-08-15 11:49                                       ` Yu Kuai
  2024-10-22 15:02                                     ` Christian Theune
  1 sibling, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-15 11:24 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 15. Aug 2024, at 13:14, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> On 2024/08/15 18:03, Christian Theune wrote:
>> Hi,
>> small insight: even given my dataset that can reliably trigger this (after around 1.5 hours of rsyncing) it does not trigger on a specific set of files. I’ve deleted the data and started the rsync on a fresh directory (not a fresh filesystem, I can’t delete that as it carries important data) but it doesn’t always get stuck on the same files, even though rsync processes them in a repeatable order.
>> I’m wondering how to generate more insights from that. Maybe keeping a blktrace log might help?
>> It sounds like the specific pattern relies on XFS doing a specific thing there …
>> Wild idea: maybe running the xfstest suite on an in-memory raid 6 setup could reproduce this?
>> I’m guessing that the xfs people do not regularly run their test suite on a layered setup like mine with encryption and software raid?
> 
> That sounds great.

Alright. I will try that.

>>> @Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?
> 
> Yes, however, I still need some time to sort out the internal process of
> raid5. I'm quite busy with some other work stuff and I'm familiar with
> raid1/10, but not too much about raid5. :(
> 
> Main idea is to figure out why IO are not dispatched to underlying
> disks.

Sure, thanks - I’m happy to be patient. :)

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15 11:24                                     ` Christian Theune
@ 2024-08-15 11:49                                       ` Yu Kuai
  0 siblings, 0 replies; 88+ messages in thread
From: Yu Kuai @ 2024-08-15 11:49 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

On 2024/08/15 19:24, Christian Theune wrote:
> Hi,
> 
>> On 15. Aug 2024, at 13:14, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2024/08/15 18:03, Christian Theune wrote:
>>> Hi,
>>> small insight: even given my dataset that can reliably trigger this (after around 1.5 hours of rsyncing) it does not trigger on a specific set of files. I’ve deleted the data and started the rsync on a fresh directory (not a fresh filesystem, I can’t delete that as it carries important data) but it doesn’t always get stuck on the same files, even though rsync processes them in a repeatable order.
>>> I’m wondering how to generate more insights from that. Maybe keeping a blktrace log might help?
>>> It sounds like the specific pattern relies on XFS doing a specific thing there …
>>> Wild idea: maybe running the xfstest suite on an in-memory raid 6 setup could reproduce this?
>>> I’m guessing that the xfs people do not regularly run their test suite on a layered setup like mine with encryption and software raid?
>>
>> That sounds great.
> 
> Alright. I will try that.
> 
>>>> @Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?
>>
>> Yes, however, I still need some time to sort out the internal process of
>> raid5. I'm quite busy with some other work stuff and I'm familiar with
>> raid1/10, but not too much about raid5. :(
>>
>> Main idea is to figure out why IO are not dispatched to underlying
>> disks.
> 
> Sure, thanks - I’m happy to be patient. :)

Meanwhile, can you try the following patch to bypass the bitmap? Let's
see what happens when the bitmap counter can no longer block.

Note that with this patch the bitmap will not work, and data can be
inconsistent after a power failure.

Thanks,
Kuai

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 0a2d37eb38ef..5ad51e9ad805 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1463,8 +1463,7 @@ __acquires(bitmap->lock)
 
 int md_bitmap_startwrite(struct bitmap *bitmap, sector_t offset,
 			 unsigned long sectors, int behind)
 {
-	if (!bitmap)
-		return 0;
+	return 0;
 
 	if (behind) {
 		int bw;
@@ -1528,8 +1527,8 @@ EXPORT_SYMBOL(md_bitmap_startwrite);
 void md_bitmap_endwrite(struct bitmap *bitmap, sector_t offset,
 			unsigned long sectors, int success, int behind)
 {
-	if (!bitmap)
-		return;
+	return;
+
 	if (behind) {
 		if (atomic_dec_and_test(&bitmap->behind_writes))
 			wake_up(&bitmap->behind_wait);

> 
> Christian
> 


^ permalink raw reply related	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15  6:19                               ` Christian Theune
  2024-08-15 10:03                                 ` Christian Theune
@ 2024-08-15 15:53                                 ` John Stoffel
  2024-08-15 19:13                                   ` Christian Theune
  1 sibling, 1 reply; 88+ messages in thread
From: John Stoffel @ 2024-08-15 15:53 UTC (permalink / raw)
  To: Christian Theune
  Cc: John Stoffel, Yu Kuai, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:

> Hi,
>> On 14. Aug 2024, at 10:53, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Hi,
>> 
>>> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
>>> 
>>> I'd probably just do the RAID6 tests first, get them out of the way.  
>> 
>> Alright, those are running right now - I’ll let you know what happens.

> I’m not making progress here. I can’t reproduce this on an in-memory
> loopback RAID 6. However, I can’t fully reproduce the rsync either:
> on the NVMe this only triggered after around 1.5 hours of progress,
> which resulted in the hangup. I can only create around 20 GiB worth
> of RAID 6 volume on this machine. I’ve tried running rsync until it
> exhausts the space, deleting the content and running rsync again,
> but I feel like this isn’t sufficient to trigger the issue. :(

You're running on the older 5.15.x kernel or the newer 6.x kernel?  

> I’m trying to find whether any specific pattern in the files around
> the time it locks up might be relevant here and try to run the rsync
> over that portion.

That's a good idea.  Do you have highly compressed files which are
maybe exploding in size when put into LUKS encrypted partitions?  Just
a random thought, not a real idea.

> On the plus side, I have a script now that can create the various
> loopback settings quickly, so I can try out things as needed. Not
> that valuable without a reproducer, yet, though.

Yay!  Please share it.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15 15:53                                 ` John Stoffel
@ 2024-08-15 19:13                                   ` Christian Theune
  2024-08-26 14:38                                     ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-08-15 19:13 UTC (permalink / raw)
  To: John Stoffel; +Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 15. Aug 2024, at 17:53, John Stoffel <john@stoffel.org> wrote:
> 
>> I’m not making progress here. I can’t reproduce those on in-memory
>> loopback raid 6. However: i can’t fully produce the rsync. For me
>> this only triggered after around 1.5hs of progress on the NVMe which
>> resulted in the hangup. I can only create around 20 GiB worth of
>> raid 6 volume on this machine. I’ve tried running rsync until it
>> exhausts the space, deleting the content and running rsync again,
>> but I feel like this isn’t suffient to trigger the issue. :(
> 
> You're running on the older 5.13.x kernel or the newer 6.x kernel?  

6.10.3. I can reproduce it reliably on the stack of dm-crypt/mdraid/XFS, but not on the in-memory setup at the moment; from my perspective, that's because my test case there isn't big enough.

>> I’m trying to find whether any specific pattern in the files around
>> the time it locks up might be relevant here and try to run the rsync
>> over that portion.
> 
> That's a good idea.  Do you have highly compressed files which are
> maybe exploding in size when put into LUKS encrypted partitions?  Just
> a random thought, not a real idea.

Nope. No encryption in the files, and it also turned out I couldn't reproduce it on specific portions: when restarting from “fresh” it kept getting to different points before it crashed, and it needed to churn through around 200-300 GiB until it finally caved in.

>> On the plus side, I have a script now that can create the various
>> loopback settings quickly, so I can try out things as needed. Not
>> that valuable without a reproducer, yet, though.
> 
> Yay!  Please share it.

Will do next week after a bit of cleanup.

Thanks for helping so far - I’ll keep pulling this thread until it becomes loose, I hope … ;)

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-08 14:23             ` John Stoffel
@ 2024-08-19 19:12               ` tihmstar
  2024-08-19 21:05                 ` John Stoffel
  0 siblings, 1 reply; 88+ messages in thread
From: tihmstar @ 2024-08-19 19:12 UTC (permalink / raw)
  To: John Stoffel
  Cc: Christian Theune, Yu Kuai, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

[-- Attachment #1: Type: text/plain, Size: 1266 bytes --]


Hi,

I think I have the same problem (it looks very similar at least).
I updated my kernel yesterday, but before that (a few minor versions earlier) I'm pretty sure I saw a very similar stacktrace with "md_bitmap_startwrite".

The setup seems to be similar, with some slight differences.
I'm also running a Linux RAID6 with LUKS, however I'm running it on SATA HDDs, not NVMe.
Also, I have the LUKS layer on each HDD individually, and the RAID6 is built on top of the /dev/mapper/ devices of those HDDs.
I.e. I have LUKS below the RAID, while Christian has it on top of the RAID.
Directly on the RAID I have btrfs (as a "single disk"; the RAID is handled by linux-raid).

The "trigger" is similar too: I have one large NAS with 100 TB of data, which I'm trying to migrate to my new NAS.
"rclone copy --links --local-zero-size-links -P --transfers=16 --sftp-ask-password ....."

After running that for ~30 hours (over a 10G network link), IO on the new NAS is completely stuck.

I attached a few files with info about my setup, which I think might be useful.
I'm happy to help debugging, but I too have data on the new NAS, so rebuilding the RAID isn't an option for me either.
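When the hang occurs, backtraces of the blocked tasks can be forced into the kernel log via sysrq, which is usually the most useful artifact for debugging stuck IO. This uses the standard kernel sysrq facility; the RUN_SYSRQ guard is my own addition to keep the sketch from firing accidentally.

```shell
#!/bin/sh
# Sketch: dump D-state (blocked) task backtraces to dmesg during a hang.
set -e
if [ "${RUN_SYSRQ:-0}" != "1" ] || [ "$(id -u)" -ne 0 ]; then
    echo "set RUN_SYSRQ=1 and run as root to trigger the dump"
    exit 0
fi
echo 1 > /proc/sys/kernel/sysrq    # enable all sysrq functions
echo w > /proc/sysrq-trigger       # 'w': dump blocked (uninterruptible) tasks
dmesg | tail -n 200                # backtraces appear in the kernel log
```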

Cheers
tihmstar

PS: second try sending this (I've never used mailing lists before)


[-- Attachment #2: dmesg.txt --]
[-- Type: text/plain, Size: 110371 bytes --]

[root@coldnas ~]# dmesg 
[56065.390298] md/raid:md127: read error corrected (8 sectors at 8952913016 on dm-1)
[56068.949917] ata25.00: exception Emask 0x0 SAct 0x79ffffe1 SErr 0x0 action 0x0
[56068.957061] ata25.00: irq_stat 0x40000008
[56068.961099] ata25.00: failed command: READ FPDMA QUEUED
[56068.966335] ata25.00: cmd 60/40:d8:00:9d:a5/05:00:15:02:00/40 tag 27 ncq dma 688128 in
                        res 43/40:40:c0:9d:a5/00:05:15:02:00/00 Emask 0x408 (media error) <F>
[56068.982442] ata25.00: status: { DRDY SENSE ERR }
[56068.987078] ata25.00: error: { UNC }
[56069.083162] ata25.00: configured for UDMA/133
[56069.083228] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56069.083232] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
[56069.083234] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
[56069.083236] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d 00 00 00 05 40 00 00
[56069.083238] critical medium error, dev sdc, sector 8953109760 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[56069.093669] ata25: EH complete
[56071.267547] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56071.274689] ata25.00: irq_stat 0x40000008
[56071.278714] ata25.00: failed command: READ FPDMA QUEUED
[56071.283952] ata25.00: cmd 60/c0:80:48:aa:a5/02:00:15:02:00/40 tag 16 ncq dma 360448 in
                        res 43/40:c0:b0:ab:a5/00:02:15:02:00/00 Emask 0x408 (media error) <F>
[56071.300082] ata25.00: status: { DRDY SENSE ERR }
[56071.304712] ata25.00: error: { UNC }
[56071.390681] ata25.00: configured for UDMA/133
[56071.390724] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56071.390726] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
[56071.390728] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
[56071.390729] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 a5 aa 48 00 00 02 c0 00 00
[56071.390731] critical medium error, dev sdc, sector 8953113160 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
[56071.401093] ata25: EH complete
[56073.426604] ata25.00: exception Emask 0x0 SAct 0x1000 SErr 0x0 action 0x0
[56073.433402] ata25.00: irq_stat 0x40000008
[56073.437466] ata25.00: failed command: READ FPDMA QUEUED
[56073.442727] ata25.00: cmd 60/08:60:a8:b9:a5/00:00:15:02:00/40 tag 12 ncq dma 4096 in
                        res 43/40:08:a8:b9:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56073.458694] ata25.00: status: { DRDY SENSE ERR }
[56073.463326] ata25.00: error: { UNC }
[56073.564919] ata25.00: configured for UDMA/133
[56073.564929] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56073.564933] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
[56073.564935] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
[56073.564937] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 15 a5 b9 a8 00 00 00 08 00 00
[56073.564939] critical medium error, dev sdc, sector 8953117096 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56073.575101] ata25: EH complete
[56074.052895] md/raid:md127: read error corrected (8 sectors at 8953082280 on dm-1)
[56077.233232] ata25.00: exception Emask 0x0 SAct 0x83f7ff02 SErr 0x0 action 0x0
[56077.240378] ata25.00: irq_stat 0x40000008
[56077.244411] ata25.00: failed command: READ FPDMA QUEUED
[56077.249645] ata25.00: cmd 60/40:88:00:d5:a5/05:00:15:02:00/40 tag 17 ncq dma 688128 in
                        res 43/40:40:40:d5:a5/00:05:15:02:00/00 Emask 0x408 (media error) <F>
[56077.265768] ata25.00: status: { DRDY SENSE ERR }
[56077.270401] ata25.00: error: { UNC }
[56077.371947] ata25.00: configured for UDMA/133
[56077.371989] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56077.371992] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
[56077.371994] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
[56077.371995] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 00 00 00 05 40 00 00
[56077.371997] critical medium error, dev sdc, sector 8953124096 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[56077.382450] ata25: EH complete
[56079.490942] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56079.498082] ata25.00: irq_stat 0x40000008
[56079.502109] ata25.00: failed command: READ FPDMA QUEUED
[56079.507345] ata25.00: cmd 60/c0:80:40:e2:a5/02:00:15:02:00/40 tag 16 ncq dma 360448 in
                        res 43/40:c0:40:e3:a5/00:02:15:02:00/00 Emask 0x408 (media error) <F>
[56079.523453] ata25.00: status: { DRDY SENSE ERR }
[56079.528103] ata25.00: error: { UNC }
[56079.637795] ata25.00: configured for UDMA/133
[56079.637949] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56079.637952] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
[56079.637955] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
[56079.637957] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 a5 e2 40 00 00 02 c0 00 00
[56079.637959] critical medium error, dev sdc, sector 8953127488 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
[56079.648339] ata25: EH complete
[56084.636191] ata25.00: exception Emask 0x0 SAct 0x3fde1800 SErr 0x0 action 0x0
[56084.643341] ata25.00: irq_stat 0x40000008
[56084.647380] ata25.00: failed command: READ FPDMA QUEUED
[56084.652610] ata25.00: cmd 60/08:88:c8:ab:a5/00:00:15:02:00/40 tag 17 ncq dma 4096 in
                        res 43/40:08:c8:ab:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56084.668561] ata25.00: status: { DRDY SENSE ERR }
[56084.673189] ata25.00: error: { UNC }
[56084.760994] ata25.00: configured for UDMA/133
[56084.761009] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56084.761012] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
[56084.761013] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
[56084.761015] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 15 a5 ab c8 00 00 00 08 00 00
[56084.761016] critical medium error, dev sdc, sector 8953113544 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56084.771196] ata25: EH complete
[56087.035282] ata25.00: exception Emask 0x0 SAct 0x1ffc SErr 0x0 action 0x0
[56087.042074] ata25.00: irq_stat 0x40000008
[56087.046100] ata25.00: failed command: READ FPDMA QUEUED
[56087.051341] ata25.00: cmd 60/08:10:d0:ab:a5/00:00:15:02:00/40 tag 2 ncq dma 4096 in
                        res 43/40:08:d0:ab:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56087.067187] ata25.00: status: { DRDY SENSE ERR }
[56087.071879] ata25.00: error: { UNC }
[56087.168480] ata25.00: configured for UDMA/133
[56087.168495] sd 24:0:0:0: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=6s
[56087.168498] sd 24:0:0:0: [sdc] tag#2 Sense Key : Medium Error [current] 
[56087.168501] sd 24:0:0:0: [sdc] tag#2 Add. Sense: Unrecovered read error
[56087.168503] sd 24:0:0:0: [sdc] tag#2 CDB: Read(16) 88 00 00 00 00 02 15 a5 ab d0 00 00 00 08 00 00
[56087.168504] critical medium error, dev sdc, sector 8953113552 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56087.178680] ata25: EH complete
[56087.435439] md/raid:md127: read error corrected (8 sectors at 8953078736 on dm-1)
[56087.435437] md/raid:md127: read error corrected (8 sectors at 8953078728 on dm-1)
[56090.450888] ata25.00: exception Emask 0x0 SAct 0x701f9ff SErr 0x0 action 0x0
[56090.457937] ata25.00: irq_stat 0x40000008
[56090.461967] ata25.00: failed command: READ FPDMA QUEUED
[56090.467243] ata25.00: cmd 60/08:00:40:e3:a5/00:00:15:02:00/40 tag 0 ncq dma 4096 in
                        res 43/40:08:40:e3:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56090.483104] ata25.00: status: { DRDY SENSE ERR }
[56090.487736] ata25.00: error: { UNC }
[56090.708972] ata25.00: configured for UDMA/133
[56090.708989] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56090.708992] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
[56090.708994] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[56090.708997] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 a5 e3 40 00 00 00 08 00 00
[56090.708998] critical medium error, dev sdc, sector 8953127744 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56090.719219] ata25: EH complete
[56094.801196] ata25.00: exception Emask 0x0 SAct 0x3ff0 SErr 0x0 action 0x0
[56094.808023] ata25.00: irq_stat 0x40000008
[56094.812070] ata25.00: failed command: READ FPDMA QUEUED
[56094.817310] ata25.00: cmd 60/08:20:40:d5:a5/00:00:15:02:00/40 tag 4 ncq dma 4096 in
                        res 43/40:08:40:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56094.833164] ata25.00: status: { DRDY SENSE ERR }
[56094.837814] ata25.00: error: { UNC }
[56094.940782] ata25.00: configured for UDMA/133
[56094.940794] sd 24:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56094.940797] sd 24:0:0:0: [sdc] tag#4 Sense Key : Medium Error [current] 
[56094.940798] sd 24:0:0:0: [sdc] tag#4 Add. Sense: Unrecovered read error
[56094.940800] sd 24:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 40 00 00 00 08 00 00
[56094.940801] critical medium error, dev sdc, sector 8953124160 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56094.950988] ata25: EH complete
[56097.106784] ata25.00: exception Emask 0x0 SAct 0x1ff SErr 0x0 action 0x0
[56097.113493] ata25.00: irq_stat 0x40000008
[56097.117535] ata25.00: failed command: READ FPDMA QUEUED
[56097.122766] ata25.00: cmd 60/08:00:48:d5:a5/00:00:15:02:00/40 tag 0 ncq dma 4096 in
                        res 43/40:08:48:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56097.138641] ata25.00: status: { DRDY SENSE ERR }
[56097.143266] ata25.00: error: { UNC }
[56097.240013] ata25.00: configured for UDMA/133
[56097.240023] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56097.240025] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
[56097.240027] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[56097.240029] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 48 00 00 00 08 00 00
[56097.240030] critical medium error, dev sdc, sector 8953124168 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56097.250218] ata25: EH complete
[56097.410333] md/raid:md127: read error corrected (8 sectors at 8953089344 on dm-1)
[56097.410336] md/raid:md127: read error corrected (8 sectors at 8953089352 on dm-1)
[56099.619970] ata25.00: exception Emask 0x0 SAct 0xfc400 SErr 0x0 action 0x0
[56099.626852] ata25.00: irq_stat 0x40000008
[56099.630893] ata25.00: failed command: READ FPDMA QUEUED
[56099.636124] ata25.00: cmd 60/08:70:60:d5:a5/00:00:15:02:00/40 tag 14 ncq dma 4096 in
                        res 43/40:08:60:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56099.652076] ata25.00: status: { DRDY SENSE ERR }
[56099.656714] ata25.00: error: { UNC }
[56100.347242] ata25.00: configured for UDMA/133
[56100.347261] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
[56100.347265] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
[56100.347267] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Unrecovered read error
[56100.347269] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 60 00 00 00 08 00 00
[56100.347271] critical medium error, dev sdc, sector 8953124192 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56100.357447] ata25: EH complete
[56102.646580] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56102.653723] ata25.00: irq_stat 0x40000008
[56102.657799] ata25.00: failed command: READ FPDMA QUEUED
[56102.663032] ata25.00: cmd 60/08:08:68:d5:a5/00:00:15:02:00/40 tag 1 ncq dma 4096 in
                        res 43/40:08:68:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56102.678903] ata25.00: status: { DRDY SENSE ERR }
[56102.683529] ata25.00: error: { UNC }
[56102.771380] ata25.00: configured for UDMA/133
[56102.771406] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
[56102.771410] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
[56102.771412] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
[56102.771414] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 68 00 00 00 08 00 00
[56102.771416] critical medium error, dev sdc, sector 8953124200 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56102.781675] ata25: EH complete
[56103.029621] md/raid:md127: read error corrected (8 sectors at 8953092928 on dm-1)
[56103.092184] md/raid:md127: read error corrected (8 sectors at 8953089376 on dm-1)
[56103.092186] md/raid:md127: read error corrected (8 sectors at 8953089384 on dm-1)
[56105.587003] ata25.00: exception Emask 0x0 SAct 0xbf048f84 SErr 0x0 action 0x0
[56105.594143] ata25.00: irq_stat 0x40000008
[56105.598252] ata25.00: failed command: READ FPDMA QUEUED
[56105.603519] ata25.00: cmd 60/08:38:c0:9d:a5/00:00:15:02:00/40 tag 7 ncq dma 4096 in
                        res 43/40:08:c0:9d:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56105.619401] ata25.00: status: { DRDY SENSE ERR }
[56105.624038] ata25.00: error: { UNC }
[56105.720389] ata25.00: configured for UDMA/133
[56105.720406] sd 24:0:0:0: [sdc] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56105.720408] sd 24:0:0:0: [sdc] tag#7 Sense Key : Medium Error [current] 
[56105.720410] sd 24:0:0:0: [sdc] tag#7 Add. Sense: Unrecovered read error
[56105.720411] sd 24:0:0:0: [sdc] tag#7 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d c0 00 00 00 08 00 00
[56105.720412] critical medium error, dev sdc, sector 8953109952 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56105.730613] ata25: EH complete
[56107.886332] ata25.00: exception Emask 0x0 SAct 0x801f0fe SErr 0x0 action 0x0
[56107.893397] ata25.00: irq_stat 0x40000008
[56107.897450] ata25.00: failed command: READ FPDMA QUEUED
[56107.902682] ata25.00: cmd 60/08:60:d0:9d:a5/00:00:15:02:00/40 tag 12 ncq dma 4096 in
                        res 43/40:08:d0:9d:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56107.918625] ata25.00: status: { DRDY SENSE ERR }
[56107.923257] ata25.00: error: { UNC }
[56108.027884] ata25.00: configured for UDMA/133
[56108.027924] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56108.027928] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
[56108.027930] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
[56108.027932] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d d0 00 00 00 08 00 00
[56108.027933] critical medium error, dev sdc, sector 8953109968 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56108.038104] ata25: EH complete
[56109.020217] md/raid:md127: read error corrected (8 sectors at 8953075136 on dm-1)
[56109.070237] md/raid:md127: read error corrected (8 sectors at 8953075152 on dm-1)
[56128.637868] ata25.00: exception Emask 0x0 SAct 0x1f08400 SErr 0x0 action 0x0
[56128.644996] ata25.00: irq_stat 0x40000008
[56128.649038] ata25.00: failed command: READ FPDMA QUEUED
[56128.654277] ata25.00: cmd 60/40:50:08:b5:bf/05:00:15:02:00/40 tag 10 ncq dma 688128 in
                        res 43/40:40:50:b9:bf/00:05:15:02:00/00 Emask 0x408 (media error) <F>
[56128.670394] ata25.00: status: { DRDY SENSE ERR }
[56128.675024] ata25.00: error: { UNC }
[56128.778932] ata25.00: configured for UDMA/133
[56128.778969] sd 24:0:0:0: [sdc] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56128.778973] sd 24:0:0:0: [sdc] tag#10 Sense Key : Medium Error [current] 
[56128.778975] sd 24:0:0:0: [sdc] tag#10 Add. Sense: Unrecovered read error
[56128.778978] sd 24:0:0:0: [sdc] tag#10 CDB: Read(16) 88 00 00 00 00 02 15 bf b5 08 00 00 05 40 00 00
[56128.778980] critical medium error, dev sdc, sector 8954819848 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[56128.789429] ata25: EH complete
[56140.808042] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56140.815183] ata25.00: irq_stat 0x40000008
[56140.819207] ata25.00: failed command: READ FPDMA QUEUED
[56140.824447] ata25.00: cmd 60/c0:58:50:fa:c1/02:00:15:02:00/40 tag 11 ncq dma 360448 in
                        res 43/40:c0:88:fa:c1/00:02:15:02:00/00 Emask 0x408 (media error) <F>
[56140.840704] ata25.00: status: { DRDY SENSE ERR }
[56140.845334] ata25.00: error: { UNC }
[56140.933033] ata25.00: configured for UDMA/133
[56140.933158] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56140.933161] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
[56140.933163] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Unrecovered read error
[56140.933166] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa 50 00 00 02 c0 00 00
[56140.933168] critical medium error, dev sdc, sector 8954968656 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
[56140.943561] ata25: EH complete
[56151.197018] ata25.00: exception Emask 0x0 SAct 0x90a00708 SErr 0x0 action 0x0
[56151.204161] ata25.00: irq_stat 0x40000008
[56151.208245] ata25.00: failed command: READ FPDMA QUEUED
[56151.213520] ata25.00: cmd 60/08:b8:50:b9:bf/00:00:15:02:00/40 tag 23 ncq dma 4096 in
                        res 43/40:08:50:b9:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56151.229462] ata25.00: status: { DRDY SENSE ERR }
[56151.234143] ata25.00: error: { UNC }
[56151.346056] ata25.00: configured for UDMA/133
[56151.346077] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56151.346081] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
[56151.346083] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
[56151.346085] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 15 bf b9 50 00 00 00 08 00 00
[56151.346087] critical medium error, dev sdc, sector 8954820944 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56151.356272] ata25: EH complete
[56151.993105] md/raid:md127: read error corrected (8 sectors at 8954786128 on dm-1)
[56160.208102] ata25.00: exception Emask 0x0 SAct 0x6000 SErr 0x0 action 0x0
[56160.214893] ata25.00: irq_stat 0x40000008
[56160.218993] ata25.00: failed command: READ FPDMA QUEUED
[56160.224233] ata25.00: cmd 60/30:68:00:dd:bf/09:00:15:02:00/40 tag 13 ncq dma 1204224 in
                        res 43/40:30:c8:e2:bf/00:09:15:02:00/00 Emask 0x408 (media error) <F>
[56160.240498] ata25.00: status: { DRDY SENSE ERR }
[56160.245133] ata25.00: error: { UNC }
[56160.342920] ata25.00: configured for UDMA/133
[56160.342941] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
[56160.342944] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
[56160.342946] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
[56160.342949] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 bf dd 00 00 00 09 30 00 00
[56160.342950] critical medium error, dev sdc, sector 8954830080 op 0x0:(READ) flags 0x84700 phys_seg 138 prio class 0
[56160.353550] ata25: EH complete
[56162.898662] ata25.00: exception Emask 0x0 SAct 0xff033fff SErr 0x0 action 0x0
[56162.905798] ata25.00: irq_stat 0x40000008
[56162.909825] ata25.00: failed command: READ FPDMA QUEUED
[56162.915066] ata25.00: cmd 60/08:c0:88:fa:c1/00:00:15:02:00/40 tag 24 ncq dma 4096 in
                        res 43/40:08:88:fa:c1/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56162.931020] ata25.00: status: { DRDY SENSE ERR }
[56162.935724] ata25.00: error: { UNC }
[56163.025343] ata25.00: configured for UDMA/133
[56163.025377] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56163.025380] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
[56163.025381] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
[56163.025383] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa 88 00 00 00 08 00 00
[56163.025384] critical medium error, dev sdc, sector 8954968712 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56163.035579] ata25: EH complete
[56165.407934] ata25.00: exception Emask 0x0 SAct 0x3f00301e SErr 0x0 action 0x0
[56165.415071] ata25.00: irq_stat 0x40000008
[56165.419099] ata25.00: failed command: READ FPDMA QUEUED
[56165.424332] ata25.00: cmd 60/08:08:a0:fa:c1/00:00:15:02:00/40 tag 1 ncq dma 4096 in
                        res 43/40:08:a0:fa:c1/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56165.440240] ata25.00: status: { DRDY SENSE ERR }
[56165.444882] ata25.00: error: { UNC }
[56165.541134] ata25.00: configured for UDMA/133
[56165.541174] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56165.541178] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
[56165.541180] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
[56165.541182] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa a0 00 00 00 08 00 00
[56165.541183] critical medium error, dev sdc, sector 8954968736 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56165.551397] ata25: EH complete
[56165.908338] md/raid:md127: read error corrected (8 sectors at 8954933896 on dm-1)
[56165.908399] md/raid:md127: read error corrected (8 sectors at 8954933920 on dm-1)
[56168.776163] ata25.00: exception Emask 0x0 SAct 0x3e00 SErr 0x0 action 0x0
[56168.782972] ata25.00: irq_stat 0x40000008
[56168.787088] ata25.00: failed command: READ FPDMA QUEUED
[56168.792331] ata25.00: cmd 60/68:48:00:15:c2/05:00:15:02:00/40 tag 9 ncq dma 708608 in
                        res 43/40:68:38:16:c2/00:05:15:02:00/00 Emask 0x408 (media error) <F>
[56168.808360] ata25.00: status: { DRDY SENSE ERR }
[56168.812995] ata25.00: error: { UNC }
[56168.914982] ata25.00: configured for UDMA/133
[56168.915022] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56168.915026] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
[56168.915028] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
[56168.915030] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 15 c2 15 00 00 00 05 68 00 00
[56168.915032] critical medium error, dev sdc, sector 8954975488 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[56168.925513] ata25: EH complete
[56171.028271] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56171.035404] ata25.00: irq_stat 0x40000008
[56171.039425] ata25.00: failed command: READ FPDMA QUEUED
[56171.044663] ata25.00: cmd 60/88:b0:68:22:c2/05:00:15:02:00/40 tag 22 ncq dma 724992 in
                        res 43/40:88:40:24:c2/00:05:15:02:00/00 Emask 0x408 (media error) <F>
[56171.060767] ata25.00: status: { DRDY SENSE ERR }
[56171.065396] ata25.00: error: { UNC }
[56171.197504] ata25.00: configured for UDMA/133
[56171.197561] sd 24:0:0:0: [sdc] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56171.197563] sd 24:0:0:0: [sdc] tag#22 Sense Key : Medium Error [current] 
[56171.197565] sd 24:0:0:0: [sdc] tag#22 Add. Sense: Unrecovered read error
[56171.197567] sd 24:0:0:0: [sdc] tag#22 CDB: Read(16) 88 00 00 00 00 02 15 c2 22 68 00 00 05 88 00 00
[56171.197568] critical medium error, dev sdc, sector 8954978920 op 0x0:(READ) flags 0x84700 phys_seg 89 prio class 0
[56171.207924] ata25: EH complete
[56173.646130] ata25.00: exception Emask 0x0 SAct 0x1f830fc SErr 0x0 action 0x0
[56173.653181] ata25.00: irq_stat 0x40000008
[56173.657206] ata25.00: failed command: READ FPDMA QUEUED
[56173.662452] ata25.00: cmd 60/08:98:c8:e2:bf/00:00:15:02:00/40 tag 19 ncq dma 4096 in
                        res 43/40:08:c8:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56173.678392] ata25.00: status: { DRDY SENSE ERR }
[56173.683024] ata25.00: error: { UNC }
[56173.788276] ata25.00: configured for UDMA/133
[56173.788298] sd 24:0:0:0: [sdc] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56173.788301] sd 24:0:0:0: [sdc] tag#19 Sense Key : Medium Error [current] 
[56173.788302] sd 24:0:0:0: [sdc] tag#19 Add. Sense: Unrecovered read error
[56173.788304] sd 24:0:0:0: [sdc] tag#19 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 c8 00 00 00 08 00 00
[56173.788305] critical medium error, dev sdc, sector 8954831560 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56173.798484] ata25: EH complete
[56176.164900] ata25.00: exception Emask 0x0 SAct 0x7f003c40 SErr 0x0 action 0x0
[56176.172042] ata25.00: irq_stat 0x40000008
[56176.176075] ata25.00: failed command: READ FPDMA QUEUED
[56176.181310] ata25.00: cmd 60/08:c0:d8:e2:bf/00:00:15:02:00/40 tag 24 ncq dma 4096 in
                        res 43/40:08:d8:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56176.197248] ata25.00: status: { DRDY SENSE ERR }
[56176.201885] ata25.00: error: { UNC }
[56176.304061] ata25.00: configured for UDMA/133
[56176.304083] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56176.304086] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
[56176.304088] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
[56176.304091] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 d8 00 00 00 08 00 00
[56176.304092] critical medium error, dev sdc, sector 8954831576 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56176.314265] ata25: EH complete
[56178.758401] ata25.00: exception Emask 0x0 SAct 0xf0000 SErr 0x0 action 0x0
[56178.765286] ata25.00: irq_stat 0x40000008
[56178.769308] ata25.00: failed command: READ FPDMA QUEUED
[56178.774543] ata25.00: cmd 60/08:80:f0:e2:bf/00:00:15:02:00/40 tag 16 ncq dma 4096 in
                        res 43/40:08:f0:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56178.790490] ata25.00: status: { DRDY SENSE ERR }
[56178.795122] ata25.00: error: { UNC }
[56178.903147] ata25.00: configured for UDMA/133
[56178.903159] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
[56178.903162] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
[56178.903164] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
[56178.903166] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 f0 00 00 00 08 00 00
[56178.903168] critical medium error, dev sdc, sector 8954831600 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56178.913335] ata25: EH complete
[56179.970979] md/raid:md127: read error corrected (8 sectors at 8954796744 on dm-1)
[56179.995987] md/raid:md127: read error corrected (8 sectors at 8954796760 on dm-1)
[56180.127216] md/raid:md127: read error corrected (8 sectors at 8954796784 on dm-1)
[56182.417776] ata25.00: exception Emask 0x0 SAct 0x82a33a10 SErr 0x0 action 0x0
[56182.424916] ata25.00: irq_stat 0x40000008
[56182.428940] ata25.00: failed command: READ FPDMA QUEUED
[56182.434181] ata25.00: cmd 60/00:68:00:f0:bf/05:00:15:02:00/40 tag 13 ncq dma 655360 in
                        res 43/40:00:a8:f0:bf/00:05:15:02:00/00 Emask 0x408 (media error) <F>
[56182.450294] ata25.00: status: { DRDY SENSE ERR }
[56182.454927] ata25.00: error: { UNC }
[56182.860118] ata25.00: configured for UDMA/133
[56182.860160] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56182.860162] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
[56182.860164] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
[56182.860166] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 00 00 00 05 00 00 00
[56182.860167] critical medium error, dev sdc, sector 8954834944 op 0x0:(READ) flags 0x80700 phys_seg 160 prio class 0
[56182.870623] ata25: EH complete
[56187.499909] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56187.507047] ata25.00: irq_stat 0x40000008
[56187.511096] ata25.00: failed command: READ FPDMA QUEUED
[56187.516328] ata25.00: cmd 60/c0:38:40:0a:c0/02:00:15:02:00/40 tag 7 ncq dma 360448 in
                        res 43/40:c0:90:0c:c0/00:02:15:02:00/00 Emask 0x408 (media error) <F>
[56187.532352] ata25.00: status: { DRDY SENSE ERR }
[56187.536977] ata25.00: error: { UNC }
[56187.683438] ata25.00: configured for UDMA/133
[56187.683565] sd 24:0:0:0: [sdc] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
[56187.683568] sd 24:0:0:0: [sdc] tag#7 Sense Key : Medium Error [current] 
[56187.683571] sd 24:0:0:0: [sdc] tag#7 Add. Sense: Unrecovered read error
[56187.683573] sd 24:0:0:0: [sdc] tag#7 CDB: Read(16) 88 00 00 00 00 02 15 c0 0a 40 00 00 02 c0 00 00
[56187.683574] critical medium error, dev sdc, sector 8954841664 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
[56187.693985] ata25: EH complete
[56190.600849] ata25.00: exception Emask 0x0 SAct 0x3c00000 SErr 0x0 action 0x0
[56190.607902] ata25.00: irq_stat 0x40000008
[56190.611931] ata25.00: failed command: READ FPDMA QUEUED
[56190.617167] ata25.00: cmd 60/08:b0:40:24:c2/00:00:15:02:00/40 tag 22 ncq dma 4096 in
                        res 43/40:08:40:24:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56190.633103] ata25.00: status: { DRDY SENSE ERR }
[56190.637732] ata25.00: error: { UNC }
[56190.749035] ata25.00: configured for UDMA/133
[56190.749045] sd 24:0:0:0: [sdc] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56190.749047] sd 24:0:0:0: [sdc] tag#22 Sense Key : Medium Error [current] 
[56190.749049] sd 24:0:0:0: [sdc] tag#22 Add. Sense: Unrecovered read error
[56190.749051] sd 24:0:0:0: [sdc] tag#22 CDB: Read(16) 88 00 00 00 00 02 15 c2 24 40 00 00 00 08 00 00
[56190.749052] critical medium error, dev sdc, sector 8954979392 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56190.759221] ata25: EH complete
[56192.116233] md/raid:md127: read error corrected (8 sectors at 8954944576 on dm-1)
[56194.231404] ata25.00: exception Emask 0x0 SAct 0xffe00083 SErr 0x0 action 0x0
[56194.238545] ata25.00: irq_stat 0x40000008
[56194.242576] ata25.00: failed command: READ FPDMA QUEUED
[56194.247816] ata25.00: cmd 60/08:a8:90:0c:c0/00:00:15:02:00/40 tag 21 ncq dma 4096 in
                        res 43/40:08:90:0c:c0/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56194.263754] ata25.00: status: { DRDY SENSE ERR }
[56194.268384] ata25.00: error: { UNC }
[56194.372726] ata25.00: configured for UDMA/133
[56194.372748] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56194.372752] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
[56194.372754] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Unrecovered read error
[56194.372756] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 15 c0 0c 90 00 00 00 08 00 00
[56194.372758] critical medium error, dev sdc, sector 8954842256 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56194.382955] ata25: EH complete
[56196.397118] ata25.00: exception Emask 0x0 SAct 0xffcf3fff SErr 0x0 action 0x0
[56196.404260] ata25.00: irq_stat 0x40000008
[56196.408299] ata25.00: failed command: READ FPDMA QUEUED
[56196.413528] ata25.00: cmd 60/08:80:a8:f0:bf/00:00:15:02:00/40 tag 16 ncq dma 4096 in
                        res 43/40:08:a8:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56196.429462] ata25.00: status: { DRDY SENSE ERR }
[56196.434090] ata25.00: error: { UNC }
[56196.530294] ata25.00: configured for UDMA/133
[56196.530341] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56196.530345] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
[56196.530347] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
[56196.530349] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 a8 00 00 00 08 00 00
[56196.530351] critical medium error, dev sdc, sector 8954835112 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56196.540554] ata25: EH complete
[56198.681800] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56198.688940] ata25.00: irq_stat 0x40000008
[56198.692969] ata25.00: failed command: READ FPDMA QUEUED
[56198.698203] ata25.00: cmd 60/08:48:b0:f0:bf/00:00:15:02:00/40 tag 9 ncq dma 4096 in
                        res 43/40:08:b0:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56198.714053] ata25.00: status: { DRDY SENSE ERR }
[56198.718683] ata25.00: error: { UNC }
[56198.804495] ata25.00: configured for UDMA/133
[56198.804544] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56198.804547] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
[56198.804548] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
[56198.804550] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 b0 00 00 00 08 00 00
[56198.804551] critical medium error, dev sdc, sector 8954835120 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56198.814780] ata25: EH complete
[56201.562060] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56201.569195] ata25.00: irq_stat 0x40000008
[56201.573222] ata25.00: failed command: READ FPDMA QUEUED
[56201.578453] ata25.00: cmd 60/08:98:b8:f0:bf/00:00:15:02:00/40 tag 19 ncq dma 4096 in
                        res 43/40:08:b8:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56201.594397] ata25.00: status: { DRDY SENSE ERR }
[56201.599028] ata25.00: error: { UNC }
[56201.686862] ata25.00: configured for UDMA/133
[56201.686971] sd 24:0:0:0: [sdc] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
[56201.686975] sd 24:0:0:0: [sdc] tag#19 Sense Key : Medium Error [current] 
[56201.686977] sd 24:0:0:0: [sdc] tag#19 Add. Sense: Unrecovered read error
[56201.686980] sd 24:0:0:0: [sdc] tag#19 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 b8 00 00 00 08 00 00
[56201.686981] critical medium error, dev sdc, sector 8954835128 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56201.697185] ata25: EH complete
[56204.153189] ata25.00: exception Emask 0x0 SAct 0x7eb9f001 SErr 0x0 action 0x0
[56204.160325] ata25.00: irq_stat 0x40000008
[56204.164349] ata25.00: failed command: READ FPDMA QUEUED
[56204.169575] ata25.00: cmd 60/08:70:d0:f0:bf/00:00:15:02:00/40 tag 14 ncq dma 4096 in
                        res 43/40:08:d0:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56204.185512] ata25.00: status: { DRDY SENSE ERR }
[56204.190140] ata25.00: error: { UNC }
[56204.285936] ata25.00: configured for UDMA/133
[56204.285962] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
[56204.285966] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
[56204.285968] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Unrecovered read error
[56204.285969] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 d0 00 00 00 08 00 00
[56204.285970] critical medium error, dev sdc, sector 8954835152 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56204.296157] ata25: EH complete
[56204.675513] md/raid:md127: read error corrected (8 sectors at 8954800296 on dm-1)
[56204.675539] md/raid:md127: read error corrected (8 sectors at 8954807440 on dm-1)
[56205.744062] md/raid:md127: read error corrected (8 sectors at 8954800304 on dm-1)
[56205.744063] md/raid:md127: read error corrected (8 sectors at 8954800312 on dm-1)
[56205.762220] md/raid:md127: read error corrected (8 sectors at 8954800336 on dm-1)
[56208.131813] ata25.00: exception Emask 0x0 SAct 0x3ff87c SErr 0x0 action 0x0
[56208.138777] ata25.00: irq_stat 0x40000008
[56208.142810] ata25.00: failed command: READ FPDMA QUEUED
[56208.148045] ata25.00: cmd 60/08:58:38:16:c2/00:00:15:02:00/40 tag 11 ncq dma 4096 in
                        res 43/40:08:38:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56208.163988] ata25.00: status: { DRDY SENSE ERR }
[56208.168617] ata25.00: error: { UNC }
[56208.251256] ata25.00: configured for UDMA/133
[56208.251278] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56208.251281] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
[56208.251283] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Unrecovered read error
[56208.251286] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 38 00 00 00 08 00 00
[56208.251287] critical medium error, dev sdc, sector 8954975800 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56208.261478] ata25: EH complete
[56210.458900] ata25.00: exception Emask 0x0 SAct 0x57c0154b SErr 0x0 action 0x0
[56210.466042] ata25.00: irq_stat 0x40000008
[56210.470074] ata25.00: failed command: READ FPDMA QUEUED
[56210.475307] ata25.00: cmd 60/08:e0:48:16:c2/00:00:15:02:00/40 tag 28 ncq dma 4096 in
                        res 43/40:08:48:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56210.491246] ata25.00: status: { DRDY SENSE ERR }
[56210.495877] ata25.00: error: { UNC }
[56210.592096] ata25.00: configured for UDMA/133
[56210.592116] sd 24:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56210.592118] sd 24:0:0:0: [sdc] tag#28 Sense Key : Medium Error [current] 
[56210.592120] sd 24:0:0:0: [sdc] tag#28 Add. Sense: Unrecovered read error
[56210.592122] sd 24:0:0:0: [sdc] tag#28 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 48 00 00 00 08 00 00
[56210.592123] critical medium error, dev sdc, sector 8954975816 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56210.602295] ata25: EH complete
[56212.738574] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56212.745711] ata25.00: irq_stat 0x40000008
[56212.749733] ata25.00: failed command: READ FPDMA QUEUED
[56212.754964] ata25.00: cmd 60/08:68:60:16:c2/00:00:15:02:00/40 tag 13 ncq dma 4096 in
                        res 43/40:08:60:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56212.770902] ata25.00: status: { DRDY SENSE ERR }
[56212.775529] ata25.00: error: { UNC }
[56212.857966] ata25.00: configured for UDMA/133
[56212.858023] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=6s
[56212.858026] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
[56212.858028] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
[56212.858031] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 60 00 00 00 08 00 00
[56212.858032] critical medium error, dev sdc, sector 8954975840 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56212.868220] ata25: EH complete
[56222.496422] ata25.00: exception Emask 0x0 SAct 0x60000000 SErr 0x0 action 0x0
[56222.503555] ata25.00: irq_stat 0x40000008
[56222.507581] ata25.00: failed command: READ FPDMA QUEUED
[56222.512806] ata25.00: cmd 60/00:e8:00:25:c0/08:00:15:02:00/40 tag 29 ncq dma 1048576 in
                        res 43/40:00:50:28:c0/00:08:15:02:00/00 Emask 0x408 (media error) <F>
[56222.529006] ata25.00: status: { DRDY SENSE ERR }
[56222.533632] ata25.00: error: { UNC }
[56222.671224] ata25.00: configured for UDMA/133
[56222.671234] sd 24:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
[56222.671237] sd 24:0:0:0: [sdc] tag#29 Sense Key : Aborted Command [current] 
[56222.671239] sd 24:0:0:0: [sdc] tag#29 <<vendor>>ASC=0x80 ASCQ=0x2 
[56222.671241] sd 24:0:0:0: [sdc] tag#29 CDB: Read(16) 88 00 00 00 00 02 15 c0 25 00 00 00 08 00 00 00
[56222.671242] I/O error, dev sdc, sector 8954848512 op 0x0:(READ) flags 0x84700 phys_seg 30 prio class 0
[56222.680538] ata25: EH complete
[56222.688487] md/raid:md127: read error corrected (8 sectors at 8954940984 on dm-1)
[56222.688713] md/raid:md127: read error corrected (8 sectors at 8954941000 on dm-1)
[56222.694091] md/raid:md127: read error corrected (8 sectors at 8954941024 on dm-1)
[56268.461816] ata25.00: exception Emask 0x0 SAct 0x3c015f SErr 0x0 action 0x0
[56268.468783] ata25.00: irq_stat 0x40000008
[56268.472807] ata25.00: failed command: READ FPDMA QUEUED
[56268.478039] ata25.00: cmd 60/68:00:98:08:f9/07:00:15:02:00/40 tag 0 ncq dma 970752 in
                        res 43/40:68:f0:0c:f9/00:07:15:02:00/00 Emask 0x408 (media error) <F>
[56268.494065] ata25.00: status: { DRDY SENSE ERR }
[56268.498696] ata25.00: error: { UNC }
[56268.613464] ata25.00: configured for UDMA/133
[56268.613500] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
[56268.613503] sd 24:0:0:0: [sdc] tag#0 Sense Key : Aborted Command [current] 
[56268.613505] sd 24:0:0:0: [sdc] tag#0 <<vendor>>ASC=0x80 ASCQ=0x2 
[56268.613507] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 f9 08 98 00 00 07 68 00 00
[56268.613508] I/O error, dev sdc, sector 8958576792 op 0x0:(READ) flags 0x80700 phys_seg 112 prio class 0
[56268.622926] ata25: EH complete
[56274.436128] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56274.443268] ata25.00: irq_stat 0x40000008
[56274.447298] ata25.00: failed command: READ FPDMA QUEUED
[56274.452542] ata25.00: cmd 60/c0:f8:58:82:fb/02:00:15:02:00/40 tag 31 ncq dma 360448 in
                        res 43/40:c0:c0:84:fb/00:02:15:02:00/00 Emask 0x408 (media error) <F>
[56274.468653] ata25.00: status: { DRDY SENSE ERR }
[56274.473431] ata25.00: error: { UNC }
[56274.569728] ata25.00: configured for UDMA/133
[56274.569846] sd 24:0:0:0: [sdc] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
[56274.569849] sd 24:0:0:0: [sdc] tag#31 Sense Key : Medium Error [current] 
[56274.569851] sd 24:0:0:0: [sdc] tag#31 Add. Sense: Unrecovered read error
[56274.569853] sd 24:0:0:0: [sdc] tag#31 CDB: Read(16) 88 00 00 00 00 02 15 fb 82 58 00 00 02 c0 00 00
[56274.569854] critical medium error, dev sdc, sector 8958739032 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
[56274.580225] ata25: EH complete
[56277.816375] ata25.00: exception Emask 0x0 SAct 0x7c00003f SErr 0x0 action 0x0
[56277.823514] ata25.00: irq_stat 0x40000008
[56277.827538] ata25.00: failed command: READ FPDMA QUEUED
[56277.832776] ata25.00: cmd 60/08:d0:c0:84:fb/00:00:15:02:00/40 tag 26 ncq dma 4096 in
                        res 43/40:08:c0:84:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56277.848709] ata25.00: status: { DRDY SENSE ERR }
[56277.853339] ata25.00: error: { UNC }
[56277.943562] ata25.00: configured for UDMA/133
[56277.943583] sd 24:0:0:0: [sdc] tag#26 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56277.943586] sd 24:0:0:0: [sdc] tag#26 Sense Key : Medium Error [current] 
[56277.943588] sd 24:0:0:0: [sdc] tag#26 Add. Sense: Unrecovered read error
[56277.943590] sd 24:0:0:0: [sdc] tag#26 CDB: Read(16) 88 00 00 00 00 02 15 fb 84 c0 00 00 00 08 00 00
[56277.943591] critical medium error, dev sdc, sector 8958739648 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56277.953781] ata25: EH complete
[56278.035244] md/raid:md127: read error corrected (8 sectors at 8958704832 on dm-1)
[56282.414840] ata25.00: exception Emask 0x0 SAct 0x1f00 SErr 0x0 action 0x0
[56282.421632] ata25.00: irq_stat 0x40000008
[56282.425664] ata25.00: failed command: READ FPDMA QUEUED
[56282.430892] ata25.00: cmd 60/40:40:00:9d:fb/05:00:15:02:00/40 tag 8 ncq dma 688128 in
                        res 43/40:40:70:a0:fb/00:05:15:02:00/00 Emask 0x408 (media error) <F>
[56282.446907] ata25.00: status: { DRDY SENSE ERR }
[56282.451530] ata25.00: error: { UNC }
[56282.550283] ata25.00: configured for UDMA/133
[56282.550306] sd 24:0:0:0: [sdc] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56282.550309] sd 24:0:0:0: [sdc] tag#8 Sense Key : Medium Error [current] 
[56282.550311] sd 24:0:0:0: [sdc] tag#8 Add. Sense: Unrecovered read error
[56282.550313] sd 24:0:0:0: [sdc] tag#8 CDB: Read(16) 88 00 00 00 00 02 15 fb 9d 00 00 00 05 40 00 00
[56282.550314] critical medium error, dev sdc, sector 8958745856 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[56282.560764] ata25: EH complete
[56284.651765] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56284.658903] ata25.00: irq_stat 0x40000008
[56284.662929] ata25.00: failed command: READ FPDMA QUEUED
[56284.668166] ata25.00: cmd 60/f8:68:08:ad:fb/02:00:15:02:00/40 tag 13 ncq dma 389120 in
                        res 43/40:f8:50:ae:fb/00:02:15:02:00/00 Emask 0x408 (media error) <F>
[56284.684304] ata25.00: status: { DRDY SENSE ERR }
[56284.688940] ata25.00: error: { UNC }
[56284.782803] ata25.00: configured for UDMA/133
[56284.782879] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56284.782882] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
[56284.782884] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
[56284.782886] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 fb ad 08 00 00 02 f8 00 00
[56284.782887] critical medium error, dev sdc, sector 8958749960 op 0x0:(READ) flags 0x80700 phys_seg 94 prio class 0
[56284.793258] ata25: EH complete
[56291.737028] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56291.744159] ata25.00: irq_stat 0x40000008
[56291.748188] ata25.00: failed command: READ FPDMA QUEUED
[56291.753427] ata25.00: cmd 60/08:c8:50:ae:fb/00:00:15:02:00/40 tag 25 ncq dma 4096 in
                        res 43/40:08:50:ae:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56291.769386] ata25.00: status: { DRDY SENSE ERR }
[56291.774088] ata25.00: error: { UNC }
[56291.880394] ata25.00: configured for UDMA/133
[56291.880466] sd 24:0:0:0: [sdc] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56291.880470] sd 24:0:0:0: [sdc] tag#25 Sense Key : Medium Error [current] 
[56291.880472] sd 24:0:0:0: [sdc] tag#25 Add. Sense: Unrecovered read error
[56291.880475] sd 24:0:0:0: [sdc] tag#25 CDB: Read(16) 88 00 00 00 00 02 15 fb ae 50 00 00 00 08 00 00
[56291.880476] critical medium error, dev sdc, sector 8958750288 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56291.890675] ata25: EH complete
[56294.678468] ata25.00: exception Emask 0x0 SAct 0x3800800 SErr 0x0 action 0x0
[56294.685525] ata25.00: irq_stat 0x40000008
[56294.689551] ata25.00: failed command: READ FPDMA QUEUED
[56294.694778] ata25.00: cmd 60/08:c0:60:ae:fb/00:00:15:02:00/40 tag 24 ncq dma 4096 in
                        res 43/40:08:60:ae:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56294.710714] ata25.00: status: { DRDY SENSE ERR }
[56294.715342] ata25.00: error: { UNC }
[56294.804414] ata25.00: configured for UDMA/133
[56294.804432] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
[56294.804436] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
[56294.804438] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
[56294.804440] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 fb ae 60 00 00 00 08 00 00
[56294.804442] critical medium error, dev sdc, sector 8958750304 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56294.814622] ata25: EH complete
[56294.823246] md/raid:md127: read error corrected (8 sectors at 8958715472 on dm-1)
[56295.533704] md/raid:md127: read error corrected (8 sectors at 8958715488 on dm-1)
[56297.739116] ata25.00: exception Emask 0x0 SAct 0x811807fe SErr 0x0 action 0x0
[56297.746250] ata25.00: irq_stat 0x40000008
[56297.750274] ata25.00: failed command: READ FPDMA QUEUED
[56297.755507] ata25.00: cmd 60/08:08:70:a0:fb/00:00:15:02:00/40 tag 1 ncq dma 4096 in
                        res 43/40:08:70:a0:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
[56297.771358] ata25.00: status: { DRDY SENSE ERR }
[56297.775986] ata25.00: error: { UNC }
[56297.861641] ata25.00: configured for UDMA/133
[56297.861658] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56297.861661] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
[56297.861664] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
[56297.861666] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 fb a0 70 00 00 00 08 00 00
[56297.861668] critical medium error, dev sdc, sector 8958746736 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56297.871852] ata25: EH complete
[56298.209332] md/raid:md127: read error corrected (8 sectors at 8958711920 on dm-1)
[56305.006612] ata25.00: exception Emask 0x0 SAct 0x3f28f00 SErr 0x0 action 0x0
[56305.013664] ata25.00: irq_stat 0x40000008
[56305.017691] ata25.00: failed command: READ FPDMA QUEUED
[56305.022937] ata25.00: cmd 60/00:78:00:90:15/05:00:16:02:00/40 tag 15 ncq dma 655360 in
                        res 43/40:00:40:93:15/00:05:16:02:00/00 Emask 0x408 (media error) <F>
[56305.039048] ata25.00: status: { DRDY SENSE ERR }
[56305.043677] ata25.00: error: { UNC }
[56305.168764] ata25.00: configured for UDMA/133
[56305.168820] sd 24:0:0:0: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56305.168824] sd 24:0:0:0: [sdc] tag#15 Sense Key : Medium Error [current] 
[56305.168826] sd 24:0:0:0: [sdc] tag#15 Add. Sense: Unrecovered read error
[56305.168828] sd 24:0:0:0: [sdc] tag#15 CDB: Read(16) 88 00 00 00 00 02 16 15 90 00 00 00 05 00 00 00
[56305.168830] critical medium error, dev sdc, sector 8960446464 op 0x0:(READ) flags 0x80700 phys_seg 159 prio class 0
[56305.179276] ata25: EH complete
[56307.266729] ata25.00: exception Emask 0x0 SAct 0xec SErr 0x0 action 0x0
[56307.273348] ata25.00: irq_stat 0x40000008
[56307.277379] ata25.00: failed command: READ FPDMA QUEUED
[56307.282614] ata25.00: cmd 60/40:10:00:9d:15/05:00:16:02:00/40 tag 2 ncq dma 688128 in
                        res 43/40:40:20:a1:15/00:05:16:02:00/00 Emask 0x408 (media error) <F>
[56307.298638] ata25.00: status: { DRDY SENSE ERR }
[56307.303270] ata25.00: error: { UNC }
[56307.399985] ata25.00: configured for UDMA/133
[56307.400014] sd 24:0:0:0: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56307.400017] sd 24:0:0:0: [sdc] tag#2 Sense Key : Medium Error [current] 
[56307.400019] sd 24:0:0:0: [sdc] tag#2 Add. Sense: Unrecovered read error
[56307.400021] sd 24:0:0:0: [sdc] tag#2 CDB: Read(16) 88 00 00 00 00 02 16 15 9d 00 00 00 05 40 00 00
[56307.400023] critical medium error, dev sdc, sector 8960449792 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[56307.410470] ata25: EH complete
[56310.075585] ata25.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
[56310.082119] ata25.00: irq_stat 0x40000008
[56310.086146] ata25.00: failed command: READ FPDMA QUEUED
[56310.091383] ata25.00: cmd 60/08:00:40:93:15/00:00:16:02:00/40 tag 0 ncq dma 4096 in
                        res 43/40:08:40:93:15/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56310.107240] ata25.00: status: { DRDY SENSE ERR }
[56310.111869] ata25.00: error: { UNC }
[56310.249012] ata25.00: configured for UDMA/133
[56310.249025] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56310.249028] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
[56310.249031] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[56310.249033] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 16 15 93 40 00 00 00 08 00 00
[56310.249035] critical medium error, dev sdc, sector 8960447296 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56310.259210] ata25: EH complete
[56312.894324] md/raid:md127: read error corrected (8 sectors at 8960412480 on dm-1)
[56315.319228] ata25.00: exception Emask 0x0 SAct 0xe0000 SErr 0x0 action 0x0
[56315.326108] ata25.00: irq_stat 0x40000008
[56315.330136] ata25.00: failed command: READ FPDMA QUEUED
[56315.335376] ata25.00: cmd 60/08:88:20:a1:15/00:00:16:02:00/40 tag 17 ncq dma 4096 in
                        res 43/40:08:20:a1:15/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56315.351313] ata25.00: status: { DRDY SENSE ERR }
[56315.355946] ata25.00: error: { UNC }
[56315.488843] ata25.00: configured for UDMA/133
[56315.488853] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56315.488856] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
[56315.488857] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
[56315.488859] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 16 15 a1 20 00 00 00 08 00 00
[56315.488861] critical medium error, dev sdc, sector 8960450848 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56315.499030] ata25: EH complete
[56316.119276] md/raid:md127: read error corrected (8 sectors at 8960416032 on dm-1)
[56324.775934] ata25.00: exception Emask 0x0 SAct 0x7000000f SErr 0x0 action 0x0
[56324.783070] ata25.00: irq_stat 0x40000008
[56324.787095] ata25.00: failed command: READ FPDMA QUEUED
[56324.792337] ata25.00: cmd 60/c0:e0:68:ca:34/02:00:16:02:00/40 tag 28 ncq dma 360448 in
                        res 43/40:c0:88:cb:34/00:02:16:02:00/00 Emask 0x408 (media error) <F>
[56324.808453] ata25.00: status: { DRDY SENSE ERR }
[56324.813084] ata25.00: error: { UNC }
[56324.935531] ata25.00: configured for UDMA/133
[56324.935573] sd 24:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
[56324.935577] sd 24:0:0:0: [sdc] tag#28 Sense Key : Medium Error [current] 
[56324.935579] sd 24:0:0:0: [sdc] tag#28 Add. Sense: Unrecovered read error
[56324.935582] sd 24:0:0:0: [sdc] tag#28 CDB: Read(16) 88 00 00 00 00 02 16 34 ca 68 00 00 02 c0 00 00
[56324.935583] critical medium error, dev sdc, sector 8962493032 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
[56324.945933] ata25: EH complete
[56327.568023] ata25.00: exception Emask 0x0 SAct 0x7f8001ff SErr 0x0 action 0x0
[56327.575156] ata25.00: irq_stat 0x40000008
[56327.579185] ata25.00: failed command: READ FPDMA QUEUED
[56327.584415] ata25.00: cmd 60/08:b8:88:cb:34/00:00:16:02:00/40 tag 23 ncq dma 4096 in
                        res 43/40:08:88:cb:34/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56327.600346] ata25.00: status: { DRDY SENSE ERR }
[56327.604973] ata25.00: error: { UNC }
[56327.692932] ata25.00: configured for UDMA/133
[56327.692968] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56327.692971] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
[56327.692973] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
[56327.692976] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 16 34 cb 88 00 00 00 08 00 00
[56327.692978] critical medium error, dev sdc, sector 8962493320 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56327.703153] ata25: EH complete
[56330.009432] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56330.016578] ata25.00: irq_stat 0x40000008
[56330.020604] ata25.00: failed command: READ FPDMA QUEUED
[56330.025845] ata25.00: cmd 60/08:b8:90:cb:34/00:00:16:02:00/40 tag 23 ncq dma 4096 in
                        res 43/40:08:90:cb:34/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56330.041782] ata25.00: status: { DRDY SENSE ERR }
[56330.046407] ata25.00: error: { UNC }
[56330.142059] ata25.00: configured for UDMA/133
[56330.142130] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56330.142133] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
[56330.142135] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
[56330.142136] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 16 34 cb 90 00 00 00 08 00 00
[56330.142138] critical medium error, dev sdc, sector 8962493328 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56330.152328] ata25: EH complete
[56330.473245] md/raid:md127: read error corrected (8 sectors at 8962458504 on dm-1)
[56330.475305] md/raid:md127: read error corrected (8 sectors at 8962458512 on dm-1)
[56381.913875] ata25.00: exception Emask 0x0 SAct 0x8 SErr 0x0 action 0x0
[56381.920405] ata25.00: irq_stat 0x40000008
[56381.924429] ata25.00: failed command: READ FPDMA QUEUED
[56381.929664] ata25.00: cmd 60/00:18:00:d0:6b/05:00:16:02:00/40 tag 3 ncq dma 655360 in
                        res 43/40:00:18:d0:6b/00:05:16:02:00/00 Emask 0x408 (media error) <F>
[56381.945685] ata25.00: status: { DRDY SENSE ERR }
[56381.950313] ata25.00: error: { UNC }
[56382.040583] ata25.00: configured for UDMA/133
[56382.040598] sd 24:0:0:0: [sdc] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56382.040601] sd 24:0:0:0: [sdc] tag#3 Sense Key : Medium Error [current] 
[56382.040603] sd 24:0:0:0: [sdc] tag#3 Add. Sense: Unrecovered read error
[56382.040605] sd 24:0:0:0: [sdc] tag#3 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 00 00 00 05 00 00 00
[56382.040606] critical medium error, dev sdc, sector 8966098944 op 0x0:(READ) flags 0x80700 phys_seg 160 prio class 0
[56382.051045] ata25: EH complete
[56384.109723] ata25.00: exception Emask 0x0 SAct 0x7fe00000 SErr 0x0 action 0x0
[56384.116861] ata25.00: irq_stat 0x40000008
[56384.120885] ata25.00: failed command: READ FPDMA QUEUED
[56384.126117] ata25.00: cmd 60/40:a8:00:dd:6b/05:00:16:02:00/40 tag 21 ncq dma 688128 in
                        res 43/40:40:f8:dd:6b/00:05:16:02:00/00 Emask 0x408 (media error) <F>
[56384.142231] ata25.00: status: { DRDY SENSE ERR }
[56384.146862] ata25.00: error: { UNC }
[56384.264767] ata25.00: configured for UDMA/133
[56384.264806] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56384.264809] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
[56384.264810] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Unrecovered read error
[56384.264812] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 16 6b dd 00 00 00 05 40 00 00
[56384.264813] critical medium error, dev sdc, sector 8966102272 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[56384.275275] ata25: EH complete
[56386.346397] ata25.00: exception Emask 0x0 SAct 0x7f7d0 SErr 0x0 action 0x0
[56386.353275] ata25.00: irq_stat 0x40000008
[56386.357302] ata25.00: failed command: READ FPDMA QUEUED
[56386.362537] ata25.00: cmd 60/c0:30:40:ea:6b/02:00:16:02:00/40 tag 6 ncq dma 360448 in
                        res 43/40:c0:d8:eb:6b/00:02:16:02:00/00 Emask 0x408 (media error) <F>
[56386.378556] ata25.00: status: { DRDY SENSE ERR }
[56386.383185] ata25.00: error: { UNC }
[56386.472369] ata25.00: configured for UDMA/133
[56386.472430] sd 24:0:0:0: [sdc] tag#6 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56386.472433] sd 24:0:0:0: [sdc] tag#6 Sense Key : Medium Error [current] 
[56386.472435] sd 24:0:0:0: [sdc] tag#6 Add. Sense: Unrecovered read error
[56386.472438] sd 24:0:0:0: [sdc] tag#6 CDB: Read(16) 88 00 00 00 00 02 16 6b ea 40 00 00 02 c0 00 00
[56386.472439] critical medium error, dev sdc, sector 8966105664 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
[56386.482798] ata25: EH complete
[56388.841034] ata25.00: exception Emask 0x0 SAct 0x80000041 SErr 0x0 action 0x0
[56388.848169] ata25.00: irq_stat 0x40000008
[56388.852195] ata25.00: failed command: READ FPDMA QUEUED
[56388.857423] ata25.00: cmd 60/08:f8:18:d0:6b/00:00:16:02:00/40 tag 31 ncq dma 4096 in
                        res 43/40:08:18:d0:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56388.873364] ata25.00: status: { DRDY SENSE ERR }
[56388.877996] ata25.00: error: { UNC }
[56388.988119] ata25.00: configured for UDMA/133
[56388.988136] sd 24:0:0:0: [sdc] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56388.988139] sd 24:0:0:0: [sdc] tag#31 Sense Key : Medium Error [current] 
[56388.988141] sd 24:0:0:0: [sdc] tag#31 Add. Sense: Unrecovered read error
[56388.988142] sd 24:0:0:0: [sdc] tag#31 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 18 00 00 00 08 00 00
[56388.988143] critical medium error, dev sdc, sector 8966098968 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56388.998324] ata25: EH complete
[56391.522804] ata25.00: exception Emask 0x0 SAct 0x1e9c179c SErr 0x0 action 0x0
[56391.529945] ata25.00: irq_stat 0x40000008
[56391.533976] ata25.00: failed command: READ FPDMA QUEUED
[56391.539212] ata25.00: cmd 60/08:d8:20:d0:6b/00:00:16:02:00/40 tag 27 ncq dma 4096 in
                        res 43/40:08:20:d0:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56391.555146] ata25.00: status: { DRDY SENSE ERR }
[56391.559778] ata25.00: error: { UNC }
[56391.695556] ata25.00: configured for UDMA/133
[56391.695634] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56391.695637] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
[56391.695639] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
[56391.695642] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 20 00 00 00 08 00 00
[56391.695643] critical medium error, dev sdc, sector 8966098976 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56391.705828] ata25: EH complete
[56393.848207] ata25.00: exception Emask 0x0 SAct 0xf9fff8bf SErr 0x0 action 0x0
[56393.855346] ata25.00: irq_stat 0x40000008
[56393.859386] ata25.00: failed command: READ FPDMA QUEUED
[56393.864624] ata25.00: cmd 60/c8:d8:48:3a:6e/02:00:16:02:00/40 tag 27 ncq dma 364544 in
                        res 43/40:c8:08:3b:6e/00:02:16:02:00/00 Emask 0x408 (media error) <F>
[56393.880732] ata25.00: status: { DRDY SENSE ERR }
[56393.885357] ata25.00: error: { UNC }
[56393.986399] ata25.00: configured for UDMA/133
[56393.986540] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56393.986544] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
[56393.986546] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
[56393.986549] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 16 6e 3a 48 00 00 02 c8 00 00
[56393.986550] critical medium error, dev sdc, sector 8966257224 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
[56393.996900] ata25: EH complete
[56396.083820] md/raid:md127: read error corrected (8 sectors at 8966064160 on dm-1)
[56396.083837] md/raid:md127: read error corrected (8 sectors at 8966064152 on dm-1)
[56400.565346] ata25.00: exception Emask 0x0 SAct 0xfdfbf7df SErr 0x0 action 0x0
[56400.572485] ata25.00: irq_stat 0x40000008
[56400.576527] ata25.00: failed command: READ FPDMA QUEUED
[56400.581758] ata25.00: cmd 60/08:60:08:3b:6e/00:00:16:02:00/40 tag 12 ncq dma 4096 in
                        res 43/40:08:08:3b:6e/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56400.597695] ata25.00: status: { DRDY SENSE ERR }
[56400.602329] ata25.00: error: { UNC }
[56400.734083] ata25.00: configured for UDMA/133
[56400.734136] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56400.734139] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
[56400.734142] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
[56400.734144] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 16 6e 3b 08 00 00 00 08 00 00
[56400.734146] critical medium error, dev sdc, sector 8966257416 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56400.744333] ata25: EH complete
[56401.189218] md/raid:md127: read error corrected (8 sectors at 8966222600 on dm-1)
[56403.685654] ata25.00: exception Emask 0x0 SAct 0x701ffff0 SErr 0x0 action 0x0
[56403.692794] ata25.00: irq_stat 0x40000008
[56403.696816] ata25.00: failed command: READ FPDMA QUEUED
[56403.702049] ata25.00: cmd 60/08:20:d8:eb:6b/00:00:16:02:00/40 tag 4 ncq dma 4096 in
                        res 43/40:08:d8:eb:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56403.717891] ata25.00: status: { DRDY SENSE ERR }
[56403.722515] ata25.00: error: { UNC }
[56403.833001] ata25.00: configured for UDMA/133
[56403.833024] sd 24:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56403.833029] sd 24:0:0:0: [sdc] tag#4 Sense Key : Medium Error [current] 
[56403.833032] sd 24:0:0:0: [sdc] tag#4 Add. Sense: Unrecovered read error
[56403.833034] sd 24:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 16 6b eb d8 00 00 00 08 00 00
[56403.833036] critical medium error, dev sdc, sector 8966106072 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56403.843247] ata25: EH complete
[56406.373840] ata25.00: exception Emask 0x0 SAct 0x7f81fe6b SErr 0x0 action 0x0
[56406.380972] ata25.00: irq_stat 0x40000008
[56406.385004] ata25.00: failed command: READ FPDMA QUEUED
[56406.390247] ata25.00: cmd 60/08:48:e0:eb:6b/00:00:16:02:00/40 tag 9 ncq dma 4096 in
                        res 43/40:08:e0:eb:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56406.406107] ata25.00: status: { DRDY SENSE ERR }
[56406.410737] ata25.00: error: { UNC }
[56406.507071] ata25.00: configured for UDMA/133
[56406.507094] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56406.507096] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
[56406.507098] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
[56406.507100] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 16 6b eb e0 00 00 00 08 00 00
[56406.507101] critical medium error, dev sdc, sector 8966106080 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56406.517282] ata25: EH complete
[56408.531429] ata25.00: exception Emask 0x0 SAct 0xbfff87ff SErr 0x0 action 0x0
[56408.538690] ata25.00: irq_stat 0x40000008
[56408.542722] ata25.00: failed command: READ FPDMA QUEUED
[56408.547952] ata25.00: cmd 60/08:78:f8:dd:6b/00:00:16:02:00/40 tag 15 ncq dma 4096 in
                        res 43/40:08:f8:dd:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56408.563920] ata25.00: status: { DRDY SENSE ERR }
[56408.568559] ata25.00: error: { UNC }
[56408.664647] ata25.00: configured for UDMA/133
[56408.664684] sd 24:0:0:0: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56408.664687] sd 24:0:0:0: [sdc] tag#15 Sense Key : Medium Error [current] 
[56408.664689] sd 24:0:0:0: [sdc] tag#15 Add. Sense: Unrecovered read error
[56408.664690] sd 24:0:0:0: [sdc] tag#15 CDB: Read(16) 88 00 00 00 00 02 16 6b dd f8 00 00 00 08 00 00
[56408.664692] critical medium error, dev sdc, sector 8966102520 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56408.674879] ata25: EH complete
[56410.899004] md/raid:md127: read error corrected (8 sectors at 8966071256 on dm-1)
[56410.899015] md/raid:md127: read error corrected (8 sectors at 8966071264 on dm-1)
[56412.014769] md/raid:md127: read error corrected (8 sectors at 8966067704 on dm-1)
[56419.657721] ata25.00: exception Emask 0x0 SAct 0x1eff1f SErr 0x0 action 0x0
[56419.664684] ata25.00: irq_stat 0x40000008
[56419.668729] ata25.00: failed command: READ FPDMA QUEUED
[56419.673959] ata25.00: cmd 60/00:88:00:15:6c/08:00:16:02:00/40 tag 17 ncq dma 1048576 in
                        res 43/40:00:78:15:6c/00:08:16:02:00/00 Emask 0x408 (media error) <F>
[56419.690151] ata25.00: status: { DRDY SENSE ERR }
[56419.694787] ata25.00: error: { UNC }
[56419.827430] ata25.00: configured for UDMA/133
[56419.827504] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56419.827507] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
[56419.827509] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
[56419.827510] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 16 6c 15 00 00 00 08 00 00 00
[56419.827511] critical medium error, dev sdc, sector 8966116608 op 0x0:(READ) flags 0x84700 phys_seg 29 prio class 0
[56419.837858] ata25: EH complete
[56421.908274] ata25.00: exception Emask 0x0 SAct 0x70e1847f SErr 0x0 action 0x0
[56421.915411] ata25.00: irq_stat 0x40000008
[56421.919444] ata25.00: failed command: READ FPDMA QUEUED
[56421.924674] ata25.00: cmd 60/00:00:00:1d:6c/08:00:16:02:00/40 tag 0 ncq dma 1048576 in
                        res 43/40:00:50:23:6c/00:08:16:02:00/00 Emask 0x408 (media error) <F>
[56421.940790] ata25.00: status: { DRDY SENSE ERR }
[56421.945421] ata25.00: error: { UNC }
[56422.051628] ata25.00: configured for UDMA/133
[56422.051706] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56422.051710] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
[56422.051712] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[56422.051714] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 16 6c 1d 00 00 00 08 00 00 00
[56422.051716] critical medium error, dev sdc, sector 8966118656 op 0x0:(READ) flags 0x84700 phys_seg 29 prio class 0
[56422.062114] ata25: EH complete
[56424.259349] ata25.00: exception Emask 0x0 SAct 0x6b1ffc08 SErr 0x0 action 0x0
[56424.266486] ata25.00: irq_stat 0x40000008
[56424.270517] ata25.00: failed command: READ FPDMA QUEUED
[56424.275749] ata25.00: cmd 60/08:50:78:15:6c/00:00:16:02:00/40 tag 10 ncq dma 4096 in
                        res 43/40:08:78:15:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56424.291690] ata25.00: status: { DRDY SENSE ERR }
[56424.296324] ata25.00: error: { UNC }
[56424.384160] ata25.00: configured for UDMA/133
[56424.384177] sd 24:0:0:0: [sdc] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56424.384180] sd 24:0:0:0: [sdc] tag#10 Sense Key : Medium Error [current] 
[56424.384182] sd 24:0:0:0: [sdc] tag#10 Add. Sense: Unrecovered read error
[56424.384184] sd 24:0:0:0: [sdc] tag#10 CDB: Read(16) 88 00 00 00 00 02 16 6c 15 78 00 00 00 08 00 00
[56424.384185] critical medium error, dev sdc, sector 8966116728 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56424.394369] ata25: EH complete
[56424.599523] md/raid:md127: read error corrected (8 sectors at 8966081912 on dm-1)
[56428.528870] ata25.00: exception Emask 0x0 SAct 0xff747eff SErr 0x0 action 0x0
[56428.536009] ata25.00: irq_stat 0x40000008
[56428.540060] ata25.00: failed command: READ FPDMA QUEUED
[56428.545308] ata25.00: cmd 60/08:48:50:23:6c/00:00:16:02:00/40 tag 9 ncq dma 4096 in
                        res 43/40:08:50:23:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56428.561598] ata25.00: status: { DRDY SENSE ERR }
[56428.566242] ata25.00: error: { UNC }
[56428.690982] ata25.00: configured for UDMA/133
[56428.691017] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[56428.691020] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
[56428.691022] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
[56428.691023] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 16 6c 23 50 00 00 00 08 00 00
[56428.691025] critical medium error, dev sdc, sector 8966120272 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56428.701217] ata25: EH complete
[56430.789563] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[56430.796699] ata25.00: irq_stat 0x40000008
[56430.800724] ata25.00: failed command: READ FPDMA QUEUED
[56430.805977] ata25.00: cmd 60/08:a0:58:23:6c/00:00:16:02:00/40 tag 20 ncq dma 4096 in
                        res 43/40:08:58:23:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
[56430.822085] ata25.00: status: { DRDY SENSE ERR }
[56430.826719] ata25.00: error: { UNC }
[56430.923535] ata25.00: configured for UDMA/133
[56430.923604] sd 24:0:0:0: [sdc] tag#20 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
[56430.923607] sd 24:0:0:0: [sdc] tag#20 Sense Key : Medium Error [current] 
[56430.923608] sd 24:0:0:0: [sdc] tag#20 Add. Sense: Unrecovered read error
[56430.923610] sd 24:0:0:0: [sdc] tag#20 CDB: Read(16) 88 00 00 00 00 02 16 6c 23 58 00 00 00 08 00 00
[56430.923611] critical medium error, dev sdc, sector 8966120280 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[56430.933789] ata25: EH complete
[56431.464686] md/raid:md127: read error corrected (8 sectors at 8966085464 on dm-1)
[56431.464684] md/raid:md127: read error corrected (8 sectors at 8966085456 on dm-1)
[57881.912653] ata25.00: exception Emask 0x0 SAct 0x843ff807 SErr 0x0 action 0x0
[57881.919808] ata25.00: irq_stat 0x40000008
[57881.923835] ata25.00: failed command: READ FPDMA QUEUED
[57881.929074] ata25.00: cmd 60/48:58:00:65:4c/05:00:1d:02:00/40 tag 11 ncq dma 692224 in
                        res 43/40:48:68:66:4c/00:05:1d:02:00/00 Emask 0x408 (media error) <F>
[57881.945260] ata25.00: status: { DRDY SENSE ERR }
[57881.949953] ata25.00: error: { UNC }
[57882.041190] ata25.00: configured for UDMA/133
[57882.041253] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=3s
[57882.041256] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
[57882.041258] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Data synchronization mark error
[57882.041260] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 1d 4c 65 00 00 00 05 48 00 00
[57882.041261] I/O error, dev sdc, sector 9081480448 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
[57882.050685] ata25: EH complete
[57885.724654] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[57885.731866] ata25.00: irq_stat 0x40000008
[57885.735894] ata25.00: failed command: READ FPDMA QUEUED
[57885.741217] ata25.00: cmd 60/08:20:68:66:4c/00:00:1d:02:00/40 tag 4 ncq dma 4096 in
                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57885.757076] ata25.00: status: { DRDY SENSE ERR }
[57885.761721] ata25.00: error: { UNC }
[57885.898142] ata25.00: configured for UDMA/133
[57885.898197] ata25: EH complete
[57888.362234] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[57888.369399] ata25.00: irq_stat 0x40000008
[57888.373430] ata25.00: failed command: READ FPDMA QUEUED
[57888.378665] ata25.00: cmd 60/08:60:68:66:4c/00:00:1d:02:00/40 tag 12 ncq dma 4096 in
                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57888.394601] ata25.00: status: { DRDY SENSE ERR }
[57888.399243] ata25.00: error: { UNC }
[57888.488931] ata25.00: configured for UDMA/133
[57888.488992] ata25: EH complete
[57891.013850] ata25.00: exception Emask 0x0 SAct 0xbffc00b6 SErr 0x0 action 0x0
[57891.020998] ata25.00: irq_stat 0x40000008
[57891.025037] ata25.00: failed command: READ FPDMA QUEUED
[57891.030565] ata25.00: cmd 60/08:90:68:66:4c/00:00:1d:02:00/40 tag 18 ncq dma 4096 in
                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57891.046568] ata25.00: status: { DRDY SENSE ERR }
[57891.051257] ata25.00: error: { UNC }
[57891.154641] ata25.00: configured for UDMA/133
[57891.154667] ata25: EH complete
[57893.666329] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[57893.673470] ata25.00: irq_stat 0x40000008
[57893.677495] ata25.00: failed command: READ FPDMA QUEUED
[57893.682722] ata25.00: cmd 60/08:48:68:66:4c/00:00:1d:02:00/40 tag 9 ncq dma 4096 in
                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57893.698570] ata25.00: status: { DRDY SENSE ERR }
[57893.703198] ata25.00: error: { UNC }
[57893.795404] ata25.00: configured for UDMA/133
[57893.795458] ata25: EH complete
[57896.285432] ata25.00: exception Emask 0x0 SAct 0x3ffc0020 SErr 0x0 action 0x0
[57896.292651] ata25.00: irq_stat 0x40000008
[57896.296685] ata25.00: failed command: READ FPDMA QUEUED
[57896.301920] ata25.00: cmd 60/08:90:68:66:4c/00:00:1d:02:00/40 tag 18 ncq dma 4096 in
                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57896.317965] ata25.00: status: { DRDY SENSE ERR }
[57896.322722] ata25.00: error: { UNC }
[57896.419495] ata25.00: configured for UDMA/133
[57896.419516] ata25: EH complete
[57898.960626] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
[57898.967775] ata25.00: irq_stat 0x40000008
[57898.971808] ata25.00: failed command: READ FPDMA QUEUED
[57898.977066] ata25.00: cmd 60/08:70:68:66:4c/00:00:1d:02:00/40 tag 14 ncq dma 4096 in
                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57898.993042] ata25.00: status: { DRDY SENSE ERR }
[57898.997677] ata25.00: error: { UNC }
[57899.085209] ata25.00: configured for UDMA/133
[57899.085258] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
[57899.085260] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
[57899.085262] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Data synchronization mark error
[57899.085264] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 1d 4c 66 68 00 00 00 08 00 00
[57899.085265] I/O error, dev sdc, sector 9081480808 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[57899.094430] ata25: EH complete
[57900.557539] md/raid:md127: read error corrected (8 sectors at 9081445992 on dm-1)
[57903.266393] ata25.00: exception Emask 0x0 SAct 0x7fe0c3ff SErr 0x0 action 0x0
[57903.273531] ata25.00: irq_stat 0x40000008
[57903.277559] ata25.00: failed command: READ FPDMA QUEUED
[57903.282790] ata25.00: cmd 60/10:a8:00:70:4c/05:00:1d:02:00/40 tag 21 ncq dma 663552 in
                        res 43/40:10:40:74:4c/00:05:1d:02:00/00 Emask 0x408 (media error) <F>
[57903.298899] ata25.00: status: { DRDY SENSE ERR }
[57903.303523] ata25.00: error: { UNC }
[57903.433667] ata25.00: configured for UDMA/133
[57903.433735] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[57903.433738] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
[57903.433740] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Data synchronization mark error
[57903.433742] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 1d 4c 70 00 00 00 05 10 00 00
[57903.433743] I/O error, dev sdc, sector 9081483264 op 0x0:(READ) flags 0x80700 phys_seg 162 prio class 0
[57903.443152] ata25: EH complete
[57906.791480] ata25.00: exception Emask 0x0 SAct 0x7f0fffff SErr 0x0 action 0x0
[57906.798705] ata25.00: irq_stat 0x40000008
[57906.802742] ata25.00: failed command: READ FPDMA QUEUED
[57906.807980] ata25.00: cmd 60/08:c0:40:74:4c/00:00:1d:02:00/40 tag 24 ncq dma 4096 in
                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57906.823914] ata25.00: status: { DRDY SENSE ERR }
[57906.828540] ata25.00: error: { UNC }
[57906.990783] ata25.00: configured for UDMA/133
[57906.990830] ata25: EH complete
[57909.373378] ata25.00: exception Emask 0x0 SAct 0x7800000f SErr 0x0 action 0x0
[57909.380521] ata25.00: irq_stat 0x40000008
[57909.384548] ata25.00: failed command: READ FPDMA QUEUED
[57909.389785] ata25.00: cmd 60/08:e0:40:74:4c/00:00:1d:02:00/40 tag 28 ncq dma 4096 in
                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57909.405722] ata25.00: status: { DRDY SENSE ERR }
[57909.410352] ata25.00: error: { UNC }
[57909.548237] ata25.00: configured for UDMA/133
[57909.548284] ata25: EH complete
[57911.907322] ata25.00: exception Emask 0x0 SAct 0x7003feff SErr 0x0 action 0x0
[57911.914465] ata25.00: irq_stat 0x40000008
[57911.918490] ata25.00: failed command: READ FPDMA QUEUED
[57911.923726] ata25.00: cmd 60/08:e8:40:74:4c/00:00:1d:02:00/40 tag 29 ncq dma 4096 in
                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57911.939665] ata25.00: status: { DRDY SENSE ERR }
[57911.944297] ata25.00: error: { UNC }
[57912.065349] ata25.00: configured for UDMA/133
[57912.065458] ata25: EH complete
[57914.457377] ata25.00: exception Emask 0x0 SAct 0xfff07fff SErr 0x0 action 0x0
[57914.464516] ata25.00: irq_stat 0x40000008
[57914.468545] ata25.00: failed command: READ FPDMA QUEUED
[57914.473780] ata25.00: cmd 60/08:50:40:74:4c/00:00:1d:02:00/40 tag 10 ncq dma 4096 in
                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57914.489711] ata25.00: status: { DRDY SENSE ERR }
[57914.494339] ata25.00: error: { UNC }
[57914.614467] ata25.00: configured for UDMA/133
[57914.614561] ata25: EH complete
[57917.090724] ata25.00: exception Emask 0x0 SAct 0x7ffc3fff SErr 0x0 action 0x0
[57917.097865] ata25.00: irq_stat 0x40000008
[57917.101902] ata25.00: failed command: READ FPDMA QUEUED
[57917.107138] ata25.00: cmd 60/08:e0:40:74:4c/00:00:1d:02:00/40 tag 28 ncq dma 4096 in
                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57917.123080] ata25.00: status: { DRDY SENSE ERR }
[57917.127815] ata25.00: error: { UNC }
[57917.246905] ata25.00: configured for UDMA/133
[57917.247000] ata25: EH complete
[57919.617397] ata25.00: exception Emask 0x0 SAct 0xfe7fc7ff SErr 0x0 action 0x0
[57919.624542] ata25.00: irq_stat 0x40000008
[57919.628570] ata25.00: failed command: READ FPDMA QUEUED
[57919.633802] ata25.00: cmd 60/08:40:40:74:4c/00:00:1d:02:00/40 tag 8 ncq dma 4096 in
                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
[57919.649663] ata25.00: status: { DRDY SENSE ERR }
[57919.654296] ata25.00: error: { UNC }
[57919.769685] ata25.00: configured for UDMA/133
[57919.769797] sd 24:0:0:0: [sdc] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
[57919.769800] sd 24:0:0:0: [sdc] tag#8 Sense Key : Medium Error [current] 
[57919.769803] sd 24:0:0:0: [sdc] tag#8 Add. Sense: Data synchronization mark error
[57919.769805] sd 24:0:0:0: [sdc] tag#8 CDB: Read(16) 88 00 00 00 00 02 1d 4c 74 40 00 00 00 08 00 00
[57919.769807] I/O error, dev sdc, sector 9081484352 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
[57919.778997] ata25: EH complete
[57919.882467] md/raid:md127: read error corrected (8 sectors at 9081449536 on dm-1)
[64266.546763] INFO: task btrfs-transacti:8202 blocked for more than 122 seconds.
[64266.553999]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.560013] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.567852] task:btrfs-transacti state:D stack:0     pid:8202  tgid:8202  ppid:2      flags:0x00004000
[64266.567855] Call Trace:
[64266.567856]  <TASK>
[64266.567858]  __schedule+0x3d5/0x1520
[64266.567865]  schedule+0x27/0xf0
[64266.567867]  wait_for_commit+0x11f/0x1d0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.567898]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.567902]  btrfs_commit_transaction+0xbb6/0xc80 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.567924]  transaction_kthread+0x159/0x1c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.567943]  ? __pfx_transaction_kthread+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.567958]  kthread+0xcf/0x100
[64266.567960]  ? __pfx_kthread+0x10/0x10
[64266.567962]  ret_from_fork+0x31/0x50
[64266.567964]  ? __pfx_kthread+0x10/0x10
[64266.567966]  ret_from_fork_asm+0x1a/0x30
[64266.567969]  </TASK>
[64266.567998] INFO: task kworker/u64:57:96535 blocked for more than 122 seconds.
[64266.575240]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.581256] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.589077] task:kworker/u64:57  state:D stack:0     pid:96535 tgid:96535 ppid:2      flags:0x00004000
[64266.589080] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.589084] Call Trace:
[64266.589085]  <TASK>
[64266.589086]  __schedule+0x3d5/0x1520
[64266.589091]  schedule+0x27/0xf0
[64266.589093]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.589100]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.589102]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.589107]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.589109]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589133]  ? __pfx_woken_wake_function+0x10/0x10
[64266.589139]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.589146]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.589147]  __submit_bio+0x168/0x240
[64266.589151]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.589153]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589176]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.589179]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589198]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589218]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589235]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.589237]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589254]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589271]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589293]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589310]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.589325]  do_writepages+0x7e/0x270
[64266.589329]  __writeback_single_inode+0x41/0x340
[64266.589331]  ? wbc_detach_inode+0x116/0x240
[64266.589333]  writeback_sb_inodes+0x21c/0x4f0
[64266.589341]  __writeback_inodes_wb+0x4c/0xf0
[64266.589343]  wb_writeback+0x193/0x310
[64266.589347]  wb_workfn+0x2a5/0x440
[64266.589350]  process_one_work+0x17b/0x330
[64266.589352]  worker_thread+0x2e2/0x410
[64266.589354]  ? __pfx_worker_thread+0x10/0x10
[64266.589355]  kthread+0xcf/0x100
[64266.589357]  ? __pfx_kthread+0x10/0x10
[64266.589359]  ret_from_fork+0x31/0x50
[64266.589361]  ? __pfx_kthread+0x10/0x10
[64266.589362]  ret_from_fork_asm+0x1a/0x30
[64266.589366]  </TASK>
[64266.589367] INFO: task kworker/u64:80:96615 blocked for more than 122 seconds.
[64266.596592]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.602597] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.610419] task:kworker/u64:80  state:D stack:0     pid:96615 tgid:96615 ppid:2      flags:0x00004000
[64266.610422] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.610424] Call Trace:
[64266.610425]  <TASK>
[64266.610426]  __schedule+0x3d5/0x1520
[64266.610430]  schedule+0x27/0xf0
[64266.610432]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.610436]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.610439]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.610443]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.610444]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610461]  ? __pfx_woken_wake_function+0x10/0x10
[64266.610465]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.610469]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.610471]  __submit_bio+0x168/0x240
[64266.610473]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.610476]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610495]  ? __extent_writepage_io+0x21f/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610511]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610529]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610545]  extent_write_cache_pages+0x397/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610565]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610581]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.610597]  do_writepages+0x7e/0x270
[64266.610600]  __writeback_single_inode+0x41/0x340
[64266.610601]  ? wbc_detach_inode+0x116/0x240
[64266.610604]  writeback_sb_inodes+0x21c/0x4f0
[64266.610611]  __writeback_inodes_wb+0x4c/0xf0
[64266.610614]  wb_writeback+0x193/0x310
[64266.610617]  wb_workfn+0xc4/0x440
[64266.610620]  process_one_work+0x17b/0x330
[64266.610622]  worker_thread+0x2e2/0x410
[64266.610624]  ? __pfx_worker_thread+0x10/0x10
[64266.610625]  kthread+0xcf/0x100
[64266.610627]  ? __pfx_kthread+0x10/0x10
[64266.610629]  ret_from_fork+0x31/0x50
[64266.610630]  ? __pfx_kthread+0x10/0x10
[64266.610632]  ret_from_fork_asm+0x1a/0x30
[64266.610635]  </TASK>
[64266.610636] INFO: task kworker/u64:102:96624 blocked for more than 122 seconds.
[64266.617953]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.623962] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.631784] task:kworker/u64:102 state:D stack:0     pid:96624 tgid:96624 ppid:2      flags:0x00004000
[64266.631786] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.631789] Call Trace:
[64266.631790]  <TASK>
[64266.631791]  __schedule+0x3d5/0x1520
[64266.631795]  schedule+0x27/0xf0
[64266.631797]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.631800]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.631803]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.631807]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.631808]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631824]  ? __pfx_woken_wake_function+0x10/0x10
[64266.631828]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.631832]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.631834]  __submit_bio+0x168/0x240
[64266.631836]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.631839]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631857]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.631860]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631877]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631893]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631908]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.631910]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631926]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631942]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631962]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631978]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.631993]  do_writepages+0x7e/0x270
[64266.631996]  __writeback_single_inode+0x41/0x340
[64266.631998]  ? wbc_detach_inode+0x116/0x240
[64266.632000]  writeback_sb_inodes+0x21c/0x4f0
[64266.632007]  __writeback_inodes_wb+0x4c/0xf0
[64266.632010]  wb_writeback+0x193/0x310
[64266.632013]  wb_workfn+0x2a5/0x440
[64266.632016]  process_one_work+0x17b/0x330
[64266.632018]  worker_thread+0x2e2/0x410
[64266.632020]  ? __pfx_worker_thread+0x10/0x10
[64266.632021]  kthread+0xcf/0x100
[64266.632023]  ? __pfx_kthread+0x10/0x10
[64266.632025]  ret_from_fork+0x31/0x50
[64266.632026]  ? __pfx_kthread+0x10/0x10
[64266.632028]  ret_from_fork_asm+0x1a/0x30
[64266.632031]  </TASK>
[64266.632032] INFO: task kworker/u64:25:100536 blocked for more than 122 seconds.
[64266.639341]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.645348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.653171] task:kworker/u64:25  state:D stack:0     pid:100536 tgid:100536 ppid:2      flags:0x00004000
[64266.653173] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.653175] Call Trace:
[64266.653176]  <TASK>
[64266.653177]  __schedule+0x3d5/0x1520
[64266.653181]  schedule+0x27/0xf0
[64266.653183]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.653187]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.653189]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.653193]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.653194]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653210]  ? __pfx_woken_wake_function+0x10/0x10
[64266.653214]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.653218]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.653220]  __submit_bio+0x168/0x240
[64266.653222]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.653225]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653244]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.653246]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653264]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653279]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653295]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.653297]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653312]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653328]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653348]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653364]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.653381]  do_writepages+0x7e/0x270
[64266.653383]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.653385]  ? select_task_rq_fair+0x7f8/0x1da0
[64266.653388]  __writeback_single_inode+0x41/0x340
[64266.653390]  ? wbc_detach_inode+0x116/0x240
[64266.653392]  writeback_sb_inodes+0x21c/0x4f0
[64266.653399]  __writeback_inodes_wb+0x4c/0xf0
[64266.653402]  wb_writeback+0x193/0x310
[64266.653405]  wb_workfn+0xc4/0x440
[64266.653408]  process_one_work+0x17b/0x330
[64266.653410]  worker_thread+0x2e2/0x410
[64266.653412]  ? __pfx_worker_thread+0x10/0x10
[64266.653413]  kthread+0xcf/0x100
[64266.653415]  ? __pfx_kthread+0x10/0x10
[64266.653417]  ret_from_fork+0x31/0x50
[64266.653418]  ? __pfx_kthread+0x10/0x10
[64266.653420]  ret_from_fork_asm+0x1a/0x30
[64266.653423]  </TASK>
[64266.653424] INFO: task kworker/u64:52:101215 blocked for more than 122 seconds.
[64266.660726]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.666738] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.674556] task:kworker/u64:52  state:D stack:0     pid:101215 tgid:101215 ppid:2      flags:0x00004000
[64266.674558] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.674560] Call Trace:
[64266.674561]  <TASK>
[64266.674562]  __schedule+0x3d5/0x1520
[64266.674566]  schedule+0x27/0xf0
[64266.674568]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.674572]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.674575]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.674579]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.674580]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674596]  ? __pfx_woken_wake_function+0x10/0x10
[64266.674599]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.674604]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.674605]  __submit_bio+0x168/0x240
[64266.674608]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.674610]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674629]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.674631]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674649]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674665]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674680]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.674682]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674697]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674713]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674733]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674748]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.674764]  do_writepages+0x7e/0x270
[64266.674767]  __writeback_single_inode+0x41/0x340
[64266.674769]  ? wbc_detach_inode+0x116/0x240
[64266.674771]  writeback_sb_inodes+0x21c/0x4f0
[64266.674778]  __writeback_inodes_wb+0x4c/0xf0
[64266.674780]  wb_writeback+0x193/0x310
[64266.674784]  wb_workfn+0x2a5/0x440
[64266.674787]  process_one_work+0x17b/0x330
[64266.674789]  worker_thread+0x2e2/0x410
[64266.674791]  ? __pfx_worker_thread+0x10/0x10
[64266.674792]  kthread+0xcf/0x100
[64266.674794]  ? __pfx_kthread+0x10/0x10
[64266.674795]  ret_from_fork+0x31/0x50
[64266.674797]  ? __pfx_kthread+0x10/0x10
[64266.674799]  ret_from_fork_asm+0x1a/0x30
[64266.674802]  </TASK>
[64266.674803] INFO: task kworker/u64:71:101293 blocked for more than 123 seconds.
[64266.682113]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.688120] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.695943] task:kworker/u64:71  state:D stack:0     pid:101293 tgid:101293 ppid:2      flags:0x00004000
[64266.695945] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.695948] Call Trace:
[64266.695949]  <TASK>
[64266.695950]  __schedule+0x3d5/0x1520
[64266.695954]  schedule+0x27/0xf0
[64266.695956]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.695960]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.695962]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.695966]  ? __pfx_woken_wake_function+0x10/0x10
[64266.695968]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.695969]  ? blk_cgroup_bio_start+0x8c/0xd0
[64266.695973]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.695978]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.695979]  __submit_bio+0x168/0x240
[64266.695982]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.695985]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696003]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.696005]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696023]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696038]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696054]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.696056]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696071]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696087]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696107]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696122]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.696138]  do_writepages+0x7e/0x270
[64266.696141]  __writeback_single_inode+0x41/0x340
[64266.696143]  ? wbc_detach_inode+0x116/0x240
[64266.696145]  writeback_sb_inodes+0x21c/0x4f0
[64266.696152]  __writeback_inodes_wb+0x4c/0xf0
[64266.696155]  wb_writeback+0x193/0x310
[64266.696158]  wb_workfn+0x34b/0x440
[64266.696161]  process_one_work+0x17b/0x330
[64266.696163]  worker_thread+0x2e2/0x410
[64266.696165]  ? __pfx_worker_thread+0x10/0x10
[64266.696166]  kthread+0xcf/0x100
[64266.696168]  ? __pfx_kthread+0x10/0x10
[64266.696169]  ret_from_fork+0x31/0x50
[64266.696171]  ? __pfx_kthread+0x10/0x10
[64266.696173]  ret_from_fork_asm+0x1a/0x30
[64266.696176]  </TASK>
[64266.696177] INFO: task kworker/u64:83:101621 blocked for more than 123 seconds.
[64266.703487]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.709494] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.717308] task:kworker/u64:83  state:D stack:0     pid:101621 tgid:101621 ppid:2      flags:0x00004000
[64266.717310] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.717313] Call Trace:
[64266.717313]  <TASK>
[64266.717315]  __schedule+0x3d5/0x1520
[64266.717319]  schedule+0x27/0xf0
[64266.717320]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.717324]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.717327]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.717331]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.717332]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717348]  ? __pfx_woken_wake_function+0x10/0x10
[64266.717351]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.717356]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.717357]  __submit_bio+0x168/0x240
[64266.717360]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.717363]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717381]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.717383]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717401]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717416]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717432]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.717433]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717449]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717465]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717485]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717501]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.717516]  do_writepages+0x7e/0x270
[64266.717518]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.717519]  ? autoremove_wake_function+0x15/0x60
[64266.717522]  __writeback_single_inode+0x41/0x340
[64266.717523]  ? wbc_detach_inode+0x116/0x240
[64266.717526]  writeback_sb_inodes+0x21c/0x4f0
[64266.717533]  __writeback_inodes_wb+0x4c/0xf0
[64266.717535]  wb_writeback+0x193/0x310
[64266.717539]  wb_workfn+0x2a5/0x440
[64266.717542]  process_one_work+0x17b/0x330
[64266.717544]  worker_thread+0x2e2/0x410
[64266.717546]  ? __pfx_worker_thread+0x10/0x10
[64266.717547]  kthread+0xcf/0x100
[64266.717548]  ? __pfx_kthread+0x10/0x10
[64266.717550]  ret_from_fork+0x31/0x50
[64266.717552]  ? __pfx_kthread+0x10/0x10
[64266.717554]  ret_from_fork_asm+0x1a/0x30
[64266.717557]  </TASK>
[64266.717558] INFO: task kworker/u64:88:101623 blocked for more than 123 seconds.
[64266.724865]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.730873] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.738691] task:kworker/u64:88  state:D stack:0     pid:101623 tgid:101623 ppid:2      flags:0x00004000
[64266.738693] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.738695] Call Trace:
[64266.738696]  <TASK>
[64266.738697]  __schedule+0x3d5/0x1520
[64266.738701]  schedule+0x27/0xf0
[64266.738703]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.738707]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.738709]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.738714]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.738715]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738731]  ? __pfx_woken_wake_function+0x10/0x10
[64266.738734]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.738739]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.738740]  __submit_bio+0x168/0x240
[64266.738743]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.738745]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738764]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.738766]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738788]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738804]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738819]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.738821]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738836]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738852]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738872]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738887]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.738902]  do_writepages+0x7e/0x270
[64266.738905]  __writeback_single_inode+0x41/0x340
[64266.738907]  ? wbc_detach_inode+0x116/0x240
[64266.738909]  writeback_sb_inodes+0x21c/0x4f0
[64266.738917]  __writeback_inodes_wb+0x4c/0xf0
[64266.738919]  wb_writeback+0x193/0x310
[64266.738922]  wb_workfn+0x2a5/0x440
[64266.738926]  process_one_work+0x17b/0x330
[64266.738928]  worker_thread+0x2e2/0x410
[64266.738929]  ? __pfx_worker_thread+0x10/0x10
[64266.738931]  kthread+0xcf/0x100
[64266.738932]  ? __pfx_kthread+0x10/0x10
[64266.738934]  ret_from_fork+0x31/0x50
[64266.738936]  ? __pfx_kthread+0x10/0x10
[64266.738937]  ret_from_fork_asm+0x1a/0x30
[64266.738941]  </TASK>
[64266.738942] INFO: task kworker/u64:108:102161 blocked for more than 123 seconds.
[64266.746331]       Tainted: P           OE      6.10.5-arch1-1 #1
[64266.752336] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[64266.760152] task:kworker/u64:108 state:D stack:0     pid:102161 tgid:102161 ppid:2      flags:0x00004000
[64266.760154] Workqueue: writeback wb_workfn (flush-btrfs-1)
[64266.760156] Call Trace:
[64266.760157]  <TASK>
[64266.760158]  __schedule+0x3d5/0x1520
[64266.760162]  schedule+0x27/0xf0
[64266.760164]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.760168]  ? __pfx_autoremove_wake_function+0x10/0x10
[64266.760170]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
[64266.760175]  ? __pfx_woken_wake_function+0x10/0x10
[64266.760176]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.760177]  ? blk_cgroup_bio_start+0x8c/0xd0
[64266.760181]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
[64266.760185]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.760187]  __submit_bio+0x168/0x240
[64266.760189]  submit_bio_noacct_nocheck+0x197/0x3e0
[64266.760192]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760210]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.760213]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760230]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760246]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760261]  ? folio_clear_dirty_for_io+0x121/0x190
[64266.760263]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760278]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760294]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760314]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760330]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
[64266.760345]  do_writepages+0x7e/0x270
[64266.760346]  ? srso_alias_return_thunk+0x5/0xfbef5
[64266.760349]  __writeback_single_inode+0x41/0x340
[64266.760351]  ? wbc_detach_inode+0x116/0x240
[64266.760353]  writeback_sb_inodes+0x21c/0x4f0
[64266.760360]  __writeback_inodes_wb+0x4c/0xf0
[64266.760363]  wb_writeback+0x193/0x310
[64266.760366]  wb_workfn+0x2a5/0x440
[64266.760369]  process_one_work+0x17b/0x330
[64266.760371]  worker_thread+0x2e2/0x410
[64266.760373]  ? __pfx_worker_thread+0x10/0x10
[64266.760374]  kthread+0xcf/0x100
[64266.760376]  ? __pfx_kthread+0x10/0x10
[64266.760378]  ret_from_fork+0x31/0x50
[64266.760380]  ? __pfx_kthread+0x10/0x10
[64266.760381]  ret_from_fork_asm+0x1a/0x30
[64266.760385]  </TASK>
[64266.760385] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
[85040.962654] systemd-coredump[137770]: Process 1259 (systemd-journal) of user 0 terminated abnormally with signal 6/ABRT, processing...

[-- Attachment #3: lsblk.txt --]
[-- Type: text/plain, Size: 1860 bytes --]

sda           8:0    0  14.6T  0 disk  
└─sda1        8:1    0  14.5T  0 part  
  └─c07     254:3    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdb           8:16   0  14.6T  0 disk  
└─sdb1        8:17   0  14.5T  0 part  
  └─c09     254:4    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdc           8:32   0  14.6T  0 disk  
└─sdc1        8:33   0  14.5T  0 part  
  └─c05     254:1    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdd           8:48   0  14.6T  0 disk  
└─sdd1        8:49   0  14.5T  0 part  
  └─c10     254:9    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sde           8:64   0  14.6T  0 disk  
└─sde1        8:65   0  14.5T  0 part  
  └─c06     254:10   0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdf           8:80   0  14.6T  0 disk  
└─sdf1        8:81   0  14.5T  0 part  
  └─c08     254:8    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdg           8:96   0  14.6T  0 disk  
└─sdg1        8:97   0  14.5T  0 part  
  └─c03     254:7    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdh           8:112  0  14.6T  0 disk  
└─sdh1        8:113  0  14.5T  0 part  
  └─c04     254:6    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdi           8:128  0  14.6T  0 disk  
└─sdi1        8:129  0  14.5T  0 part  
  └─c01     254:2    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
sdj           8:144  0  14.6T  0 disk  
└─sdj1        8:145  0  14.5T  0 part  
  └─c02     254:5    0  14.5T  0 crypt 
    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd

[-- Attachment #4: mdstat.txt --]
[-- Type: text/plain, Size: 349 bytes --]

[root@coldnas ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md127 : active raid6 dm-10[5] dm-9[9] dm-8[7] dm-7[2] dm-6[3] dm-5[1] dm-4[8] dm-3[6] dm-2[0] dm-1[4]
      124972269568 blocks super 1.2 level 6, 4096k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      bitmap: 3/117 pages [12KB], 65536KB chunk

unused devices: <none>
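A side note on the traces above: every blocked writer is parked in raid5_get_active_stripe(), which typically means the writers are waiting for a free stripe head. A minimal diagnostic sketch follows — the array name md127 is taken from the mdstat output above, but the value 8192 is purely an illustrative assumption, not a confirmed fix for this hang:

```shell
# Sketch: inspect (and optionally raise) the raid5/6 stripe cache.
# raid5_get_active_stripe() sleeping in many writers is often correlated
# with stripe cache exhaustion; this does NOT prove that is the cause here.
SC=/sys/block/md127/md/stripe_cache_size
if [ -r "$SC" ]; then
    echo "current stripe_cache_size: $(cat "$SC")"
    # echo 8192 > "$SC"   # illustrative value; needs root, costs RAM
else
    echo "md127 not present on this host"
fi
```

Raising stripe_cache_size trades memory for more in-flight stripes; it is a mitigation to test with, not a root-cause fix, and the hang reported here may well be a genuine deadlock rather than simple cache pressure.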

[-- Attachment #5: ps_dead.txt --]
[-- Type: text/plain, Size: 127024 bytes --]

ps aux | grep "        D"
root        8202  0.2  0.0      0     0 ?        D    Aug18   4:11 [btrfs-transaction]
hasi        9117  0.4  0.0   4544  2628 ?        Ds   Aug18   6:33 /usr/lib/ssh/sftp-server
hasi        9143  0.4  0.0   4572  2756 ?        Ds   Aug18   6:37 /usr/lib/ssh/sftp-server
hasi        9180  1.1  0.0   4452  2796 ?        Ds   Aug18  16:56 /usr/lib/ssh/sftp-server
hasi       30949  0.4  0.0   4488  2556 ?        Ds   Aug18   4:55 /usr/lib/ssh/sftp-server
hasi       31971  0.3  0.0   4512  2628 ?        Ds   Aug18   4:30 /usr/lib/ssh/sftp-server
hasi       32277  0.3  0.0   4592  2580 ?        Ds   Aug18   4:30 /usr/lib/ssh/sftp-server
hasi       32379  0.3  0.0   4540  2692 ?        Ds   Aug18   3:58 /usr/lib/ssh/sftp-server
hasi       32694  0.3  0.0   4536  2976 ?        Ds   Aug18   3:54 /usr/lib/ssh/sftp-server
hasi       34038  0.2  0.0   4536  2736 ?        Ds   Aug18   2:59 /usr/lib/ssh/sftp-server
hasi       35651  0.2  0.0   4568  2556 ?        Ds   01:18   1:56 /usr/lib/ssh/sftp-server
hasi       35767  0.2  0.0   4604  2624 ?        Ds   01:21   2:03 /usr/lib/ssh/sftp-server
hasi       35771  0.2  0.0   4464  2556 ?        Ds   01:21   2:07 /usr/lib/ssh/sftp-server
hasi       35816  0.2  0.0   4512  2864 ?        Ds   01:22   2:07 /usr/lib/ssh/sftp-server
hasi       36497  0.2  0.0   4636  2864 ?        Ds   01:37   2:01 /usr/lib/ssh/sftp-server
hasi       36504  0.2  0.0   4548  2692 ?        Ds   01:37   2:06 /usr/lib/ssh/sftp-server
hasi       36511  0.2  0.0   4540  2864 ?        Ds   01:37   1:48 /usr/lib/ssh/sftp-server
hasi       43900  0.1  0.0   4456  2568 ?        Ds   02:47   1:36 /usr/lib/ssh/sftp-server
hasi       55877  0.1  0.0   4476  2756 ?        Ds   03:22   1:17 /usr/lib/ssh/sftp-server
hasi       55917  0.1  0.0   4516  2848 ?        Ds   03:24   1:16 /usr/lib/ssh/sftp-server
hasi       56055  0.1  0.0   4516  2456 ?        Ds   03:27   1:21 /usr/lib/ssh/sftp-server
hasi       56075  0.1  0.0   4380  2584 ?        Ds   03:28   1:26 /usr/lib/ssh/sftp-server
hasi       56086  0.1  0.0   4512  2712 ?        Ds   03:28   1:28 /usr/lib/ssh/sftp-server
hasi       56090  0.1  0.0   4556  2928 ?        Ds   03:28   1:28 /usr/lib/ssh/sftp-server
hasi       56231  0.1  0.0   4508  2752 ?        Ds   03:29   1:15 /usr/lib/ssh/sftp-server
hasi       57837  0.1  0.0   4408  2500 ?        Ds   03:33   1:15 /usr/lib/ssh/sftp-server
hasi       57840  0.1  0.0   4460  2944 ?        Ds   03:33   1:12 /usr/lib/ssh/sftp-server
hasi       85960  0.1  0.0   4552  2816 ?        Ds   05:00   1:03 /usr/lib/ssh/sftp-server
hasi       85967  0.1  0.0   4508  2648 ?        Ds   05:00   0:41 /usr/lib/ssh/sftp-server
hasi       85980  0.1  0.0   4520  2568 ?        Ds   05:00   0:55 /usr/lib/ssh/sftp-server
hasi       85985  0.1  0.0   4496  2784 ?        Ds   05:00   0:53 /usr/lib/ssh/sftp-server
root       96535  0.8  0.0      0     0 ?        D    05:35   5:46 [kworker/u64:57+flush-btrfs-1]
root       96615  0.6  0.0      0     0 ?        D    05:35   4:33 [kworker/u64:80+flush-btrfs-1]
root       96624  0.6  0.0      0     0 ?        D    05:35   4:31 [kworker/u64:102+flush-btrfs-1]
root      100536  0.6  0.0      0     0 ?        D    05:53   4:25 [kworker/u64:25+flush-btrfs-1]
root      101215  0.6  0.0      0     0 ?        D    05:57   3:56 [kworker/u64:52+flush-btrfs-1]
root      101293  0.7  0.0      0     0 ?        D    05:57   4:53 [kworker/u64:71+flush-btrfs-1]
root      101621  0.6  0.0      0     0 ?        D    05:59   3:56 [kworker/u64:83+flush-btrfs-1]
root      101623  0.6  0.0      0     0 ?        D    05:59   3:57 [kworker/u64:88+flush-btrfs-1]
root      102161  0.6  0.0      0     0 ?        D    06:01   3:55 [kworker/u64:108+flush-btrfs-1]
root      103370  0.5  0.0      0     0 ?        D    06:08   3:43 [kworker/u64:144+flush-btrfs-1]
root      108317  0.6  0.0      0     0 ?        D    06:30   3:48 [kworker/u64:24+flush-btrfs-1]
root      109537  0.6  0.0      0     0 ?        D    06:37   3:40 [kworker/u64:4+flush-btrfs-1]
root      109550  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:26+flush-btrfs-1]
root      109553  0.6  0.0      0     0 ?        D    06:37   3:41 [kworker/u64:32+flush-btrfs-1]
root      109555  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:34+flush-btrfs-1]
root      109559  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:42+flush-btrfs-1]
root      109564  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:48+flush-btrfs-1]
root      115503  0.2  0.0      0     0 ?        D    07:37   1:26 [kworker/u64:6+flush-btrfs-1]
root      118242  0.1  0.0      0     0 ?        D    08:08   0:47 [kworker/u64:68+flush-btrfs-1]
root      119894  0.1  0.0      0     0 ?        D    08:22   0:36 [kworker/u64:14+flush-btrfs-1]
root      120633  0.1  0.0      0     0 ?        D    08:28   0:43 [kworker/u64:5+flush-btrfs-1]
root      120638  0.0  0.0      0     0 ?        D    08:28   0:20 [kworker/u64:23+flush-btrfs-1]
root      120642  0.0  0.0      0     0 ?        D    08:28   0:28 [kworker/u64:35+flush-btrfs-1]
root      122280  0.0  0.0      0     0 ?        D    08:36   0:20 [kworker/u64:0+flush-btrfs-1]
root      122669  0.0  0.0      0     0 ?        D    08:39   0:11 [kworker/u64:79+flush-btrfs-1]
root      123019  0.0  0.0      0     0 ?        D    08:42   0:27 [kworker/u64:85+blkcg_punt_bio]
root      123028  0.0  0.0      0     0 ?        D    08:42   0:21 [kworker/u64:95+flush-btrfs-1]
root      124094  0.0  0.0      0     0 ?        D    08:51   0:08 [kworker/u64:29+flush-btrfs-1]
root      124101  0.0  0.0      0     0 ?        D    08:51   0:20 [kworker/u64:91+flush-btrfs-1]
root      124177  0.0  0.0      0     0 ?        D    08:51   0:19 [kworker/u64:106+flush-btrfs-1]
root      124418  0.0  0.0      0     0 ?        D    08:53   0:20 [kworker/u64:114+flush-btrfs-1]
root      124423  0.0  0.0      0     0 ?        D    08:53   0:15 [kworker/u64:119+flush-btrfs-1]
root      124428  0.0  0.0      0     0 ?        D    08:53   0:13 [kworker/u64:124+flush-btrfs-1]
root      124628  0.0  0.0      0     0 ?        D    08:55   0:22 [kworker/u64:132+flush-btrfs-1]
root      124629  0.0  0.0      0     0 ?        D    08:55   0:20 [kworker/u64:133+flush-btrfs-1]
root      124630  0.0  0.0      0     0 ?        D    08:55   0:05 [kworker/u64:134+flush-btrfs-1]
root      124633  0.0  0.0      0     0 ?        D    08:55   0:05 [kworker/u64:137+flush-btrfs-1]
root      124636  0.0  0.0      0     0 ?        D    08:55   0:07 [kworker/u64:140+blkcg_punt_bio]
root      124637  0.0  0.0      0     0 ?        D    08:55   0:23 [kworker/u64:141+flush-btrfs-1]
work      125055  0.0  0.0 156020 22868 ?        Dl   08:59   0:15 smbd: client [192.168.67.33]
root      126064  0.0  0.0      0     0 ?        D    09:07   0:18 [kworker/u64:28+flush-btrfs-1]
hasi      127483  0.0  0.0   2604  1776 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
hasi      127494  0.0  0.0   2604  1704 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
hasi      127501  0.0  0.0   2604  1688 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
hasi      127508  0.0  0.0   2604  1772 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
hasi      127520  0.0  0.0   2604  1804 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
hasi      127527  0.0  0.0   2604  1864 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
hasi      127543  0.0  0.0   2604  1732 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
hasi      127551  0.0  0.0   2604  1700 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
hasi      127558  0.0  0.0   2604  1916 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
hasi      127565  0.0  0.0   2604  1916 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
hasi      127572  0.0  0.0   2604  1700 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
hasi      127579  0.0  0.0   2604  1864 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
hasi      127600  0.0  0.0   2604  1688 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
hasi      127608  0.0  0.0   2604  1700 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
hasi      127617  0.0  0.0   2604  1732 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
hasi      127624  0.0  0.0   2604  1728 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
hasi      127631  0.0  0.0   2604  1908 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
hasi      127646  0.0  0.0   2604  1596 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
hasi      127654  0.0  0.0   2604  2044 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
hasi      127664  0.0  0.0   2604  1732 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
hasi      127676  0.0  0.0   2604  1752 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
hasi      127683  0.0  0.0   2604  1728 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
hasi      127701  0.0  0.0   2604  1776 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
hasi      127709  0.0  0.0   2604  1688 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
hasi      127716  0.0  0.0   2604  1872 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
hasi      127723  0.0  0.0   2604  1700 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
hasi      127730  0.0  0.0   2736  1960 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
hasi      127741  0.0  0.0   2604  1844 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
root      127746  0.0  0.0      0     0 ?        D    09:39   0:00 [kworker/u64:1+flush-btrfs-1]
hasi      127760  0.0  0.0   2604  1704 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
hasi      127768  0.0  0.0   2604  1872 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
hasi      127777  0.0  0.0   2604  1944 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
hasi      127784  0.0  0.0   2604  1732 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
hasi      127791  0.0  0.0   2604  1712 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
hasi      127803  0.0  0.0   2604  1728 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
hasi      127811  0.0  0.0   2604  1596 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
hasi      127818  0.0  0.0   2604  1728 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
hasi      127830  0.0  0.0   2604  1776 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
hasi      127837  0.0  0.0   2604  1772 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
hasi      127848  0.0  0.0   2604  1728 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
hasi      127856  0.0  0.0   2604  1916 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
hasi      127863  0.0  0.0   2604  1916 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
hasi      127870  0.0  0.0   2604  1904 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
hasi      127878  0.0  0.0   2604  1700 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
hasi      127885  0.0  0.0   2604  1716 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
hasi      127899  0.0  0.0   2604  1864 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
root      127900  0.0  0.0      0     0 ?        D    09:46   0:00 [kworker/u64:2+flush-btrfs-1]
hasi      127908  0.0  0.0   2604  1716 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
hasi      127917  0.0  0.0   2604  2072 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
hasi      127929  0.0  0.0   2604  1732 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
hasi      127936  0.0  0.0   2604  2000 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
hasi      127948  0.0  0.0   2604  1776 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
hasi      127956  0.0  0.0   2604  1688 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
root      127957  0.0  0.0      0     0 ?        D    09:47   0:00 [kworker/u64:3+flush-btrfs-1]
root      127958  0.0  0.0      0     0 ?        D    09:47   0:00 [kworker/u64:7+flush-btrfs-1]
hasi      127965  0.0  0.0   2604  1824 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
hasi      127980  0.0  0.0   2604  1704 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
hasi      127987  0.0  0.0   2604  2016 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
hasi      128005  0.0  0.0   2604  1704 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
hasi      128013  0.0  0.0   2604  1752 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
hasi      128020  0.0  0.0   2604  1700 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
hasi      128027  0.0  0.0   2604  1732 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
hasi      128034  0.0  0.0   2604  1856 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
hasi      128043  0.0  0.0   2604  1776 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
hasi      128057  0.0  0.0   2604  1992 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
root      128062  0.0  0.0      0     0 ?        D    09:51   0:00 [kworker/u64:8+flush-btrfs-1]
root      128063  0.0  0.0      0     0 ?        D    09:51   0:00 [kworker/u64:9+flush-btrfs-1]
hasi      128071  0.0  0.0   2604  1916 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
hasi      128080  0.0  0.0   2604  1716 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
hasi      128087  0.0  0.0   2604  1864 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
hasi      128094  0.0  0.0   2604  1596 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
hasi      128104  0.0  0.0   2604  1776 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
hasi      128112  0.0  0.0   2604  1772 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
hasi      128119  0.0  0.0   2604  1688 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
hasi      128131  0.0  0.0   2604  1864 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
hasi      128138  0.0  0.0   2604  1944 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
hasi      128149  0.0  0.0   2604  1864 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
hasi      128157  0.0  0.0   2604  1804 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
hasi      128164  0.0  0.0   2604  1944 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
hasi      128171  0.0  0.0   2604  1880 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
hasi      128180  0.0  0.0   2604  1716 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
hasi      128187  0.0  0.0   2604  1596 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
hasi      128201  0.0  0.0   2604  1596 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
hasi      128209  0.0  0.0   2740  1904 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
hasi      128216  0.0  0.0   4408  3012 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
root      128217  0.0  0.0      0     0 ?        D    09:56   0:00 [kworker/u64:10+flush-btrfs-1]
hasi      128228  0.0  0.0   2604  1728 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
hasi      128235  0.0  0.0   2604  2044 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
hasi      128246  0.0  0.0   2604  1844 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
root      128250  0.0  0.0      0     0 ?        D    09:57   0:00 [kworker/u64:11+flush-btrfs-1]
hasi      128260  0.0  0.0   4408  2696 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
hasi      128263  0.0  0.0   2604  1688 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
hasi      128277  0.0  0.0   2604  1908 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
root      128282  0.0  0.0      0     0 ?        D    09:58   0:00 [kworker/u64:12+flush-btrfs-1]
hasi      128289  0.0  0.0   2604  1752 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
hasi      128303  0.0  0.0   2604  1716 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
hasi      128306  0.0  0.0   4408  2752 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
hasi      128315  0.0  0.0   2604  1688 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
hasi      128322  0.0  0.0   2604  1864 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
hasi      128329  0.0  0.0   2604  2072 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
hasi      128338  0.0  0.0   2604  1728 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
root      128345  0.0  0.0      0     0 ?        D    10:00   0:00 [kworker/u64:13+flush-btrfs-1]
hasi      128361  0.0  0.0   2604  2044 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
root      128364  0.0  0.0      0     0 ?        D    10:01   0:00 [kworker/u64:15+flush-btrfs-1]
hasi      128372  0.0  0.0   2604  1772 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
hasi      128381  0.0  0.0   2604  1732 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
hasi      128390  0.0  0.0   4408  2752 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
hasi      128397  0.0  0.0   2604  1700 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
hasi      128400  0.0  0.0   2604  1688 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
hasi      128414  0.0  0.0   2604  1596 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
hasi      128421  0.0  0.0   2604  1728 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
hasi      128433  0.0  0.0   2604  1872 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
hasi      128440  0.0  0.0   2604  1772 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
hasi      128459  0.0  0.0   2604  1700 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
hasi      128466  0.0  0.0   2604  2044 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
hasi      128475  0.0  0.0   4456  2944 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
hasi      128485  0.0  0.0   2604  1688 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
hasi      128493  0.0  0.0   2604  1944 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
hasi      128500  0.0  0.0   2604  1804 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
root      128504  0.0  0.0      0     0 ?        D    10:05   0:00 [kworker/u64:16+flush-btrfs-1]
hasi      128515  0.0  0.0   2604  1688 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
root      128516  0.0  0.0      0     0 ?        D    10:06   0:00 [kworker/u64:17+flush-btrfs-1]
hasi      128524  0.0  0.0   2604  1916 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
hasi      128533  0.0  0.0   2604  1752 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
hasi      128540  0.0  0.0   2604  1732 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
hasi      128547  0.0  0.0   2604  2072 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
hasi      128560  0.0  0.0   2604  1688 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
hasi      128567  0.0  0.0   2604  1716 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
root      128572  0.0  0.0      0     0 ?        D    10:07   0:00 [kworker/u64:18+flush-btrfs-1]
hasi      128576  0.0  0.0   2604  1716 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
hasi      128588  0.0  0.0   2604  2000 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
hasi      128599  0.0  0.0   2604  1900 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
root      128601  0.0  0.0      0     0 ?        D    10:08   0:00 [kworker/u64:19+flush-btrfs-1]
hasi      128612  0.0  0.0   2604  1944 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
hasi      128619  0.0  0.0   2604  1728 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
hasi      128627  0.0  0.0   2604  1596 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
hasi      128634  0.0  0.0   2604  1700 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
hasi      128641  0.0  0.0   2604  1908 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
hasi      128650  0.0  0.0   2604  2016 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
root      128652  0.0  0.0      0     0 ?        D    10:09   0:00 [kworker/u64:20+flush-btrfs-1]
root      128656  0.0  0.0      0     0 ?        D    10:10   0:00 [kworker/u64:21+flush-btrfs-1]
hasi      128667  0.0  0.0   2604  1904 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
root      128670  0.0  0.0      0     0 ?        D    10:11   0:00 [kworker/u64:22+flush-btrfs-1]
hasi      128678  0.0  0.0   2604  1688 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
hasi      128687  0.0  0.0   2604  1732 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
hasi      128694  0.0  0.0   2604  1752 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
hasi      128701  0.0  0.0   2604  1716 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
hasi      128711  0.0  0.0   2604  1596 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
hasi      128718  0.0  0.0   2604  1716 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
root      128724  0.0  0.0      0     0 ?        D    10:12   0:00 [kworker/u64:27+flush-btrfs-1]
hasi      128727  0.0  0.0   2604  1864 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
hasi      128741  0.0  0.0   2604  1776 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
hasi      128748  0.0  0.0   2604  1596 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
root      128749  0.0  0.0      0     0 ?        D    10:13   0:00 [kworker/u64:30+flush-btrfs-1]
hasi      128760  0.0  0.0   2604  1596 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
hasi      128767  0.0  0.0   2604  1908 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
hasi      128776  0.0  0.0   2604  1864 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
hasi      128786  0.0  0.0   2604  1688 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
hasi      128793  0.0  0.0   2604  1872 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
hasi      128800  0.0  0.0   2604  1772 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
hasi      128817  0.0  0.0   2604  1916 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
hasi      128825  0.0  0.0   2604  2072 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
hasi      128835  0.0  0.0   2604  2000 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
hasi      128843  0.0  0.0   2604  1704 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
hasi      128850  0.0  0.0   2604  1716 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
hasi      128860  0.0  0.0   2780  1904 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
hasi      128869  0.0  0.0   2604  1728 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
root      128871  0.0  0.0      0     0 ?        D    10:17   0:00 [kworker/u64:31+flush-btrfs-1]
hasi      128878  0.0  0.0   2604  1916 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
hasi      128899  0.0  0.0   2604  1880 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
hasi      128907  0.0  0.0   2604  1944 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
hasi      128919  0.0  0.0   2604  1844 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
root      128922  0.0  0.0      0     0 ?        D    10:19   0:00 [kworker/u64:33+flush-btrfs-1]
hasi      128929  0.0  0.0   2604  1752 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
hasi      128937  0.0  0.0   2604  1596 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
hasi      128944  0.0  0.0   2604  1716 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
hasi      128951  0.0  0.0   2604  1804 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
hasi      128958  0.0  0.0   2604  2072 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
root      128965  0.0  0.0      0     0 ?        D    10:20   0:00 [kworker/u64:36+flush-btrfs-1]
hasi      128976  0.0  0.0   2604  1716 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
root      128977  0.0  0.0      0     0 ?        D    10:21   0:00 [kworker/u64:37+flush-btrfs-1]
hasi      128985  0.0  0.0   2604  1772 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
hasi      128994  0.0  0.0   2604  1916 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
hasi      129001  0.0  0.0   2604  1944 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
hasi      129008  0.0  0.0   2604  1916 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
hasi      129018  0.0  0.0   2604  1804 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
hasi      129025  0.0  0.0   2604  1704 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
hasi      129033  0.0  0.0   2604  1728 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
hasi      129045  0.0  0.0   2604  1688 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
hasi      129052  0.0  0.0   2604  1732 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
hasi      129063  0.0  0.0   2604  1872 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
hasi      129070  0.0  0.0   2604  1968 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
root      129074  0.0  0.0      0     0 ?        D    10:24   0:00 [kworker/u64:38+flush-btrfs-1]
hasi      129082  0.0  0.0   2604  1804 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
hasi      129089  0.0  0.0   2604  1688 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
hasi      129096  0.0  0.0   2604  1716 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
hasi      129103  0.0  0.0   2604  1688 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
hasi      129117  0.0  0.0   2604  1732 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
hasi      129125  0.0  0.0   2604  2000 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
hasi      129139  0.0  0.0   2604  1776 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
work      129145  0.0  0.0  97816 22024 ?        Dl   10:26   0:00 smbd: client [192.168.67.33]
root      129161  0.0  0.0      0     0 ?        D    10:27   0:00 [kworker/u64:39+flush-btrfs-1]
hasi      129168  0.0  0.0   2604  1716 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
hasi      129175  0.0  0.0   2604  1864 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
hasi      129185  0.0  0.0   2604  1864 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
hasi      129192  0.0  0.0   2604  1844 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
hasi      129203  0.0  0.0   2604  1688 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
hasi      129216  0.0  0.0   2604  1688 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
hasi      129223  0.0  0.0   2604  1992 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
hasi      129236  0.0  0.0   2604  2072 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
hasi      129246  0.0  0.0   2604  1596 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
hasi      129255  0.0  0.0   2604  1944 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
hasi      129262  0.0  0.0   2604  1716 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
root      129263  0.0  0.0      0     0 ?        D    10:29   0:00 [kworker/u64:40+flush-btrfs-1]
hasi      129270  0.0  0.0   2604  1700 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
hasi      129277  0.0  0.0   2604  1772 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
hasi      129292  0.0  0.0   2604  1776 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
hasi      129300  0.0  0.0   2604  1732 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
hasi      129309  0.0  0.0   2604  1700 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
hasi      129319  0.0  0.0   2604  1880 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
hasi      129330  0.0  0.0   2604  1864 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
hasi      129341  0.0  0.0   2604  1688 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
hasi      129348  0.0  0.0   2604  1716 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
hasi      129355  0.0  0.0   2604  1772 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
root      129362  0.0  0.0      0     0 ?        D    10:33   0:00 [kworker/u64:41+flush-btrfs-1]
hasi      129369  0.0  0.0   2604  1944 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
hasi      129376  0.0  0.0   2604  1776 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
hasi      129394  0.0  0.0   2604  1752 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
hasi      129401  0.0  0.0   2604  1864 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
hasi      129409  0.0  0.0   2604  1944 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
hasi      129416  0.0  0.0   2604  1900 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
hasi      129425  0.0  0.0   2604  1944 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
hasi      129432  0.0  0.0   2604  1728 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
hasi      129450  0.0  0.0   2604  1700 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
hasi      129458  0.0  0.0   2604  1856 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
hasi      129472  0.0  0.0   2604  1752 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
hasi      129479  0.0  0.0   2604  1732 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
root      129480  0.0  0.0      0     0 ?        D    10:37   0:00 [kworker/u64:43+flush-btrfs-1]
hasi      129487  0.0  0.0   2604  1704 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
hasi      129497  0.0  0.0   2604  1732 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
hasi      129504  0.0  0.0   2604  1856 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
hasi      129512  0.0  0.0   2604  1700 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
root      129519  0.0  0.0      0     0 ?        D    10:38   0:00 [kworker/u64:44+flush-btrfs-1]
hasi      129526  0.0  0.0   2604  2044 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
hasi      129535  0.0  0.0   2604  1752 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
root      129541  0.0  0.0      0     0 ?        D    10:38   0:00 [kworker/u64:45+flush-btrfs-1]
hasi      129554  0.0  0.0   2736  1772 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
root      129560  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:46+flush-btrfs-1]
hasi      129567  0.0  0.0   2604  1916 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
hasi      129575  0.0  0.0   2604  1872 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
hasi      129582  0.0  0.0   2604  1688 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
hasi      129589  0.0  0.0   2604  1972 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
hasi      129597  0.0  0.0   2604  1596 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
root      129598  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:47+flush-btrfs-1]
root      129599  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:49+flush-btrfs-1]
hasi      129613  0.0  0.0   2604  1916 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
hasi      129622  0.0  0.0   2604  1728 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
hasi      129631  0.0  0.0   2604  1944 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
hasi      129638  0.0  0.0   2604  1960 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
hasi      129647  0.0  0.0   2604  1916 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
hasi      129657  0.0  0.0   2604  1776 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
hasi      129664  0.0  0.0   2604  1728 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
hasi      129671  0.0  0.0   2604  1716 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
root      129679  0.0  0.0      0     0 ?        D    10:43   0:00 [kworker/u64:50+flush-btrfs-1]
hasi      129686  0.0  0.0   2604  1596 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
hasi      129693  0.0  0.0   2604  1596 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
hasi      129704  0.0  0.0   2604  1688 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
hasi      129711  0.0  0.0   2604  1864 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
hasi      129719  0.0  0.0   2604  1700 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
hasi      129726  0.0  0.0   2604  1944 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
hasi      129733  0.0  0.0   2604  1620 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
hasi      129740  0.0  0.0   2604  1864 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
hasi      129754  0.0  0.0   2604  1776 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
hasi      129763  0.0  0.0   2604  2000 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
hasi      129776  0.0  0.0   2604  2016 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
hasi      129784  0.0  0.0   2604  1696 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
hasi      129791  0.0  0.0   2604  2000 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
hasi      129803  0.0  0.0   2604  1716 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
hasi      129810  0.0  0.0   2604  1700 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
root      129811  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:51+flush-btrfs-1]
root      129812  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:53+flush-btrfs-1]
hasi      129819  0.0  0.0   2604  1688 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
root      129823  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:54+flush-btrfs-1]
hasi      129836  0.0  0.0   2604  1716 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
hasi      129843  0.0  0.0   2604  1856 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
hasi      129856  0.0  0.0   2604  1992 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
root      129861  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:55+flush-btrfs-1]
root      129862  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:56+flush-btrfs-1]
root      129863  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:58+flush-btrfs-1]
hasi      129870  0.0  0.0   2604  1688 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
hasi      129878  0.0  0.0   2604  1804 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
hasi      129885  0.0  0.0   2604  1872 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
hasi      129892  0.0  0.0   2604  1904 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
hasi      129902  0.0  0.0   2604  1688 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
hasi      129917  0.0  0.0   2604  1596 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
hasi      129925  0.0  0.0   2604  1772 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
hasi      129934  0.0  0.0   2604  1944 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
hasi      129941  0.0  0.0   2604  1776 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
hasi      129949  0.0  0.0   2604  1716 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
hasi      129958  0.0  0.0   2604  1596 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
hasi      129965  0.0  0.0   2604  1728 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
hasi      129972  0.0  0.0   2604  1752 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
hasi      129985  0.0  0.0   2604  1772 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
hasi      129992  0.0  0.0   2604  1864 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
hasi      130004  0.0  0.0   2604  1864 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
hasi      130011  0.0  0.0   2604  1844 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
hasi      130020  0.0  0.0   2604  1732 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
hasi      130027  0.0  0.0   2604  1596 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
hasi      130034  0.0  0.0   2604  1752 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
hasi      130041  0.0  0.0   2604  1772 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
hasi      130055  0.0  0.0   2604  1700 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
hasi      130063  0.0  0.0   2604  1992 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
hasi      130079  0.0  0.0   2604  1916 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
hasi      130086  0.0  0.0   2604  1596 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
hasi      130097  0.0  0.0   2604  1688 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
hasi      130105  0.0  0.0   2604  1732 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
hasi      130109  0.0  0.0   2604  1752 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
hasi      130114  0.0  0.0   2604  1916 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
root      130127  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/u64:59+flush-btrfs-1]
hasi      130138  0.0  0.0   2604  1716 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
hasi      130145  0.0  0.0   2604  1944 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
hasi      130156  0.0  0.0   2604  1900 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
root      130161  0.0  0.0      0     0 ?        D    10:59   0:00 [kworker/u64:60+flush-btrfs-1]
hasi      130168  0.0  0.0   2604  1872 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
hasi      130176  0.0  0.0   2604  1916 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
hasi      130183  0.0  0.0   2604  1688 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
hasi      130190  0.0  0.0   2604  1832 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
hasi      130199  0.0  0.0   2604  1704 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
root      130206  0.0  0.0      0     0 ?        D    11:00   0:00 [kworker/u64:61+flush-btrfs-1]
hasi      130222  0.0  0.0   2604  1776 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
hasi      130232  0.0  0.0   2604  1728 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
hasi      130239  0.0  0.0   2604  1772 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
hasi      130246  0.0  0.0   2604  1752 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
hasi      130254  0.0  0.0   2604  1716 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
hasi      130263  0.0  0.0   2604  1864 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
hasi      130270  0.0  0.0   2604  1872 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
hasi      130279  0.0  0.0   2604  1700 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
root      130281  0.0  0.0      0     0 ?        D    11:02   0:00 [kworker/u64:62+flush-btrfs-1]
hasi      130291  0.0  0.0   2604  1772 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
hasi      130298  0.0  0.0   2604  1804 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
hasi      130310  0.0  0.0   2604  1772 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
hasi      130322  0.0  0.0   2604  1916 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
hasi      130332  0.0  0.0   2604  1864 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
hasi      130339  0.0  0.0   2604  1704 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
hasi      130349  0.0  0.0   2604  1700 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
hasi      130356  0.0  0.0   2604  1716 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
hasi      130370  0.0  0.0   2604  1864 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
hasi      130378  0.0  0.0   2604  1880 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
hasi      130391  0.0  0.0   2604  2016 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
root      130393  0.0  0.0      0     0 ?        D    11:07   0:00 [kworker/u64:63+flush-btrfs-1]
root      130394  0.0  0.0      0     0 ?        D    11:07   0:00 [kworker/u64:64+flush-btrfs-1]
hasi      130402  0.0  0.0   2604  1704 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
hasi      130410  0.0  0.0   2604  2072 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
hasi      130420  0.0  0.0   2604  1716 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
hasi      130427  0.0  0.0   2604  1700 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
hasi      130436  0.0  0.0   2604  1864 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
hasi      130448  0.0  0.0   2604  1904 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
hasi      130458  0.0  0.0   2604  1844 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
hasi      130470  0.0  0.0   2604  1856 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
root      130475  0.0  0.0      0     0 ?        D    11:09   0:00 [kworker/u64:65+flush-btrfs-1]
hasi      130482  0.0  0.0   2604  1916 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
hasi      130490  0.0  0.0   2604  1688 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
hasi      130497  0.0  0.0   2604  1732 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
hasi      130504  0.0  0.0   2604  1872 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
hasi      130511  0.0  0.0   2604  1944 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
hasi      130525  0.0  0.0   2604  1716 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
hasi      130533  0.0  0.0   2604  1872 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
hasi      130542  0.0  0.0   2604  1716 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
hasi      130549  0.0  0.0   2604  1872 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
hasi      130557  0.0  0.0   2604  1804 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
hasi      130566  0.0  0.0   2604  1772 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
hasi      130573  0.0  0.0   2604  1728 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
hasi      130582  0.0  0.0   2604  1716 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
hasi      130593  0.0  0.0   2604  1864 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
hasi      130600  0.0  0.0   2604  1716 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
hasi      130611  0.0  0.0   2604  1776 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
hasi      130618  0.0  0.0   2604  1900 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
hasi      130630  0.0  0.0   2604  1872 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
hasi      130637  0.0  0.0   2604  1700 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
hasi      130644  0.0  0.0   2604  1596 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
hasi      130651  0.0  0.0   2604  1872 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
hasi      130668  0.0  0.0   2604  1772 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
hasi      130676  0.0  0.0   2700  1988 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
hasi      130693  0.0  0.0   2604  1716 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
root      130694  0.0  0.0      0     0 ?        D    11:17   0:00 [kworker/u64:66+flush-btrfs-1]
hasi      130701  0.0  0.0   2604  1864 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
hasi      130709  0.0  0.0   2604  1776 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
hasi      130718  0.0  0.0   2604  1716 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
hasi      130725  0.0  0.0   2604  1864 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
hasi      130734  0.0  0.0   2604  1776 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
hasi      130745  0.0  0.0   2604  1732 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
hasi      130752  0.0  0.0   2604  2072 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
root      130767  0.0  0.0      0     0 ?        D    11:18   0:00 [kworker/u64:67+flush-btrfs-1]
hasi      130775  0.0  0.0   2604  1904 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
root      130781  0.0  0.0      0     0 ?        D    11:19   0:00 [kworker/u64:69+flush-btrfs-1]
hasi      130788  0.0  0.0   2604  1716 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
hasi      130796  0.0  0.0   2604  1864 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
hasi      130803  0.0  0.0   2604  1776 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
hasi      130810  0.0  0.0   2604  1944 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
hasi      130817  0.0  0.0   2604  1916 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
hasi      130831  0.0  0.0   2604  1704 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
hasi      130841  0.0  0.0   2604  1944 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
hasi      130848  0.0  0.0   2604  1944 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
hasi      130855  0.0  0.0   2604  1828 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
hasi      130870  0.0  0.0   2604  1872 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
root      130871  0.0  0.0      0     0 ?        D    11:22   0:00 [kworker/u64:70+flush-btrfs-1]
hasi      130880  0.0  0.0   2604  1688 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
hasi      130887  0.0  0.0   2604  1776 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
hasi      130896  0.0  0.0   2604  1804 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
hasi      130907  0.0  0.0   2604  1752 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
hasi      130914  0.0  0.0   2604  1728 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
root      130915  0.0  0.0      0     0 ?        D    11:23   0:00 [kworker/u64:72+flush-btrfs-1]
hasi      130926  0.0  0.0   2604  1732 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
hasi      130933  0.0  0.0   2604  1596 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
hasi      130944  0.0  0.0   4408  2808 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
hasi      130947  0.0  0.0   2604  1872 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
hasi      130955  0.0  0.0   2604  1728 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
hasi      130962  0.0  0.0   2604  1872 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
hasi      130977  0.0  0.0   2604  1804 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
hasi      130985  0.0  0.0   2604  1900 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
hasi      130995  0.0  0.0   2604  1704 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
hasi      131024  0.0  0.0   2604  1716 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
hasi      131031  0.0  0.0   2604  1804 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
hasi      131040  0.0  0.0   2676  1904 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
hasi      131048  0.0  0.0   2604  1904 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
root      131064  0.0  0.0      0     0 ?        D    11:27   0:00 [kworker/u64:73+flush-btrfs-1]
hasi      131065  0.0  0.0   2604  1688 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
root      131067  0.0  0.0      0     0 ?        D    11:27   0:00 [kworker/u64:74+flush-btrfs-1]
hasi      131077  0.0  0.0   2604  2044 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
hasi      131085  0.0  0.0   2604  1804 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
hasi      131096  0.0  0.0   2604  1752 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
hasi      131103  0.0  0.0   2604  1916 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
hasi      131112  0.0  0.0   2604  1916 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
hasi      131119  0.0  0.0   2604  1804 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
hasi      131126  0.0  0.0   2604  1704 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
hasi      131133  0.0  0.0   2676  2000 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
hasi      131150  0.0  0.0   2604  1752 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
hasi      131158  0.0  0.0   2604  1872 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
hasi      131167  0.0  0.0   2604  1596 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
hasi      131178  0.0  0.0   2604  1844 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
root      131184  0.0  0.0      0     0 ?        D    11:32   0:00 [kworker/u64:75+flush-btrfs-1]
hasi      131191  0.0  0.0   2604  1944 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
hasi      131200  0.0  0.0   2604  1596 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
hasi      131207  0.0  0.0   2604  1916 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
hasi      131216  0.0  0.0   2604  1732 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
hasi      131227  0.0  0.0   2604  1728 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
hasi      131234  0.0  0.0   2604  1864 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
hasi      131250  0.0  0.0   2604  1716 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
root      131251  0.0  0.0      0     0 ?        D    11:34   0:00 [kworker/u64:76+flush-btrfs-1]
hasi      131258  0.0  0.0   2604  2004 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
hasi      131267  0.0  0.0   2604  1916 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
hasi      131274  0.0  0.0   2604  1772 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
hasi      131281  0.0  0.0   2604  1728 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
hasi      131288  0.0  0.0   2604  1872 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
hasi      131301  0.0  0.0   2604  1728 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
hasi      131309  0.0  0.0   2604  1728 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
hasi      131318  0.0  0.0   2604  1872 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
hasi      131326  0.0  0.0   2604  1944 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
hasi      131333  0.0  0.0   2676  1844 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
hasi      131344  0.0  0.0   2604  1728 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
hasi      131351  0.0  0.0   2604  1880 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
root      131357  0.0  0.0      0     0 ?        D    11:37   0:00 [kworker/u64:77+flush-btrfs-1]
hasi      131367  0.0  0.0   2604  1728 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
root      131372  0.0  0.0      0     0 ?        D    11:38   0:00 [kworker/u64:78+flush-btrfs-1]
hasi      131379  0.0  0.0   2604  1944 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
hasi      131386  0.0  0.0   2604  1944 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
hasi      131397  0.0  0.0   2604  1804 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
hasi      131404  0.0  0.0   2604  1716 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
hasi      131412  0.0  0.0   2604  1704 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
hasi      131419  0.0  0.0   2604  1704 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
hasi      131426  0.0  0.0   2604  1900 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
hasi      131435  0.0  0.0   2604  1700 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
hasi      131456  0.0  0.0   2604  1904 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
hasi      131464  0.0  0.0   4408  2968 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
root      131465  0.0  0.0      0     0 ?        D    11:41   0:00 [kworker/u64:81+flush-btrfs-1]
hasi      131476  0.0  0.0   2604  1700 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
hasi      131484  0.0  0.0   2604  2000 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
hasi      131495  0.0  0.0   2604  1728 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
hasi      131504  0.0  0.0   2604  1804 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
hasi      131511  0.0  0.0   2604  1776 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
hasi      131520  0.0  0.0   2604  1776 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
root      131522  0.0  0.0      0     0 ?        D    11:42   0:00 [kworker/u64:82+flush-btrfs-1]
hasi      131532  0.0  0.0   2604  1944 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
hasi      131539  0.0  0.0   2604  1716 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
hasi      131550  0.0  0.0   2604  1704 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
hasi      131557  0.0  0.0   2604  1776 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
hasi      131565  0.0  0.0   2604  1864 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
hasi      131572  0.0  0.0   2604  1688 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
hasi      131579  0.0  0.0   2604  1716 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
hasi      131586  0.0  0.0   2604  1716 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
hasi      131600  0.0  0.0   2604  1864 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
hasi      131608  0.0  0.0   2604  1688 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
hasi      131617  0.0  0.0   2604  2000 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
hasi      131630  0.0  0.0   2604  1944 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
hasi      131637  0.0  0.0   2604  1700 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
hasi      131646  0.0  0.0   2604  2000 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
root      131654  0.0  0.0      0     0 ?        D    11:47   0:00 [kworker/u64:84+flush-btrfs-1]
hasi      131655  0.0  0.0   2604  1904 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
hasi      131667  0.0  0.0   2604  1776 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
root      131674  0.0  0.0      0     0 ?        D    11:48   0:00 [kworker/u64:86+flush-btrfs-1]
hasi      131681  0.0  0.0   2604  1716 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
hasi      131688  0.0  0.0   2604  1716 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
hasi      131700  0.0  0.0   2604  1776 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
hasi      131707  0.0  0.0   2604  1596 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
hasi      131715  0.0  0.0   2604  1716 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
hasi      131722  0.0  0.0   2604  1864 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
hasi      131729  0.0  0.0   2604  2072 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
hasi      131741  0.0  0.0   2604  1732 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
hasi      131757  0.0  0.0   2604  1844 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
root      131761  0.0  0.0      0     0 ?        D    11:51   0:00 [kworker/u64:87+flush-btrfs-1]
hasi      131769  0.0  0.0   2604  1716 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
hasi      131778  0.0  0.0   2604  1752 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
root      131779  0.0  0.0      0     0 ?        D    11:52   0:00 [kworker/u64:89+flush-btrfs-1]
hasi      131787  0.0  0.0   2604  1944 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
hasi      131794  0.0  0.0   2604  1728 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
hasi      131803  0.0  0.0   2604  1716 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
hasi      131810  0.0  0.0   2604  1596 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
hasi      131819  0.0  0.0   2604  1864 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
root      131822  0.0  0.0      0     0 ?        D    11:52   0:00 [kworker/u64:90+flush-btrfs-1]
hasi      131831  0.0  0.0   2604  1804 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
hasi      131838  0.0  0.0   2604  1752 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
hasi      131849  0.0  0.0   2604  1596 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
hasi      131856  0.0  0.0   2604  1916 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
hasi      131864  0.0  0.0   2604  1728 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
hasi      131871  0.0  0.0   2604  1916 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
hasi      131878  0.0  0.0   2604  1944 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
hasi      131885  0.0  0.0   2604  1732 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
hasi      131900  0.0  0.0   2604  1728 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
hasi      131908  0.0  0.0   2604  1864 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
hasi      131917  0.0  0.0   2676  1828 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
hasi      131930  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
hasi      131937  0.0  0.0   2604  1772 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
hasi      131946  0.0  0.0   2604  2016 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
hasi      131956  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
hasi      131965  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
hasi      131998  0.0  0.0   2604  1716 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
hasi      132005  0.0  0.0   2604  1872 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
hasi      132016  0.0  0.0   2604  1772 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
hasi      132023  0.0  0.0   2604  1908 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
hasi      132031  0.0  0.0   2604  1728 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
hasi      132038  0.0  0.0   2604  1804 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
hasi      132045  0.0  0.0   2604  1844 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
hasi      132057  0.0  0.0   2604  1716 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
root      132058  0.0  0.0      0     0 ?        D    11:59   0:00 [kworker/u64:92+flush-btrfs-1]
hasi      132080  0.0  0.0   2604  1908 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
hasi      132093  0.0  0.0   2604  1916 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
root      132094  0.0  0.0      0     0 ?        D    12:01   0:00 [kworker/u64:93+flush-btrfs-1]
hasi      132101  0.0  0.0   2604  1944 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
root      132103  0.0  0.0      0     0 ?        D    12:02   0:00 [kworker/u64:94+flush-btrfs-1]
hasi      132110  0.0  0.0   2604  1716 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
hasi      132117  0.0  0.0   2604  1704 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
hasi      132126  0.0  0.0   2604  1688 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
hasi      132133  0.0  0.0   2604  1944 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
hasi      132142  0.0  0.0   2604  1688 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
hasi      132153  0.0  0.0   2604  1772 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
hasi      132160  0.0  0.0   2604  1916 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
hasi      132172  0.0  0.0   2604  1716 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
hasi      132184  0.0  0.0   2604  1728 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
hasi      132192  0.0  0.0   2604  1712 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
hasi      132201  0.0  0.0   2604  1864 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
hasi      132208  0.0  0.0   2604  1596 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
hasi      132215  0.0  0.0   2604  1944 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
hasi      132229  0.0  0.0   2604  1804 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
hasi      132237  0.0  0.0   2604  1728 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
hasi      132246  0.0  0.0   2604  1932 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
hasi      132261  0.0  0.0   2604  1772 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
hasi      132268  0.0  0.0   2604  1804 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
hasi      132277  0.0  0.0   2676  1816 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
hasi      132285  0.0  0.0   2604  1772 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
root      132286  0.0  0.0      0     0 ?        D    12:07   0:00 [kworker/u64:96+flush-btrfs-1]
hasi      132295  0.0  0.0   2604  1804 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
root      132296  0.0  0.0      0     0 ?        D    12:07   0:00 [kworker/u64:97+flush-btrfs-1]
hasi      132307  0.0  0.0   2604  1728 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
hasi      132314  0.0  0.0   2604  1732 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
hasi      132325  0.0  0.0   2604  1932 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
hasi      132334  0.0  0.0   2604  1716 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
hasi      132342  0.0  0.0   2604  1596 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
hasi      132349  0.0  0.0   2604  1804 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
hasi      132356  0.0  0.0   2604  1844 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
hasi      132368  0.0  0.0   2604  1932 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
root      132369  0.0  0.0      0     0 ?        D    12:09   0:00 [kworker/u64:98+flush-btrfs-1]
hasi      132383  0.0  0.0   2604  1772 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
root      132384  0.0  0.0      0     0 ?        D    12:11   0:00 [kworker/u64:99+flush-btrfs-1]
hasi      132392  0.0  0.0   2604  1688 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
hasi      132401  0.0  0.0   2604  1728 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
hasi      132409  0.0  0.0   2604  1864 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
hasi      132416  0.0  0.0   2604  1596 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
hasi      132425  0.0  0.0   2604  1752 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
hasi      132432  0.0  0.0   2604  1700 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
hasi      132441  0.0  0.0   2604  1704 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
hasi      132452  0.0  0.0   2604  1596 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
hasi      132459  0.0  0.0   2604  1596 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
hasi      132470  0.0  0.0   2604  1944 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
hasi      132477  0.0  0.0   2604  1752 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
hasi      132485  0.0  0.0   2604  1944 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
hasi      132492  0.0  0.0   2604  1776 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
hasi      132499  0.0  0.0   2604  1864 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
hasi      132506  0.0  0.0   2604  1772 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
hasi      132523  0.0  0.0   2604  1772 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
hasi      132531  0.0  0.0   2604  1904 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
hasi      132542  0.0  0.0   2604  2000 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
root      132556  0.0  0.0      0     0 ?        D    12:17   0:00 [kworker/u64:100+flush-btrfs-1]
hasi      132557  0.0  0.0   2604  1872 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
hasi      132564  0.0  0.0   2604  1728 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
hasi      132578  0.0  0.0   2604  1704 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
hasi      132585  0.0  0.0   2604  1916 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
hasi      132594  0.0  0.0   2604  1864 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
hasi      132608  0.0  0.0   2604  1816 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
hasi      132617  0.0  0.0   2604  1772 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
root      132618  0.0  0.0      0     0 ?        D    12:18   0:00 [kworker/u64:101+flush-btrfs-1]
hasi      132630  0.0  0.0   2604  1932 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
hasi      132640  0.0  0.0   2604  1772 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
hasi      132648  0.0  0.0   2604  1752 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
hasi      132655  0.0  0.0   2604  1916 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
hasi      132662  0.0  0.0   2604  1844 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
hasi      132672  0.0  0.0   2604  1772 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
hasi      132686  0.0  0.0   2604  1960 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
hasi      132697  0.0  0.0   2604  1776 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
hasi      132704  0.0  0.0   2604  1752 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
root      132705  0.0  0.0      0     0 ?        D    12:22   0:00 [kworker/u64:103+flush-btrfs-1]
hasi      132713  0.0  0.0   2604  1872 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
hasi      132720  0.0  0.0   2604  1716 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
hasi      132729  0.0  0.0   2604  1700 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
hasi      132736  0.0  0.0   2604  1732 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
hasi      132745  0.0  0.0   2604  1732 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
hasi      132756  0.0  0.0   2604  1596 ?        Ds   12:23   0:00 /usr/lib/ssh/sftp-server
hasi      132763  0.0  0.0   2604  1916 ?        Ds   12:23   0:00 /usr/lib/ssh/sftp-server
hasi      132774  0.0  0.0   2604  1688 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
hasi      132781  0.0  0.0   2604  1700 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
hasi      132789  0.0  0.0   2604  2016 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
hasi      132800  0.0  0.0   2604  1732 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
hasi      132807  0.0  0.0   2604  1728 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
hasi      132814  0.0  0.0   2604  1804 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
root      132815  0.0  0.0      0     0 ?        D    12:24   0:00 [kworker/u64:104+flush-btrfs-1]
hasi      132829  0.0  0.0   2604  1704 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
hasi      132839  0.0  0.0   2740  2000 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
hasi      132847  0.0  0.0   2604  1960 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
root      132851  0.0  0.0      0     0 ?        D    12:27   0:00 [kworker/u64:105+flush-btrfs-1]
root      132852  0.0  0.0      0     0 ?        D    12:27   0:00 [kworker/u64:107+flush-btrfs-1]
hasi      132860  0.0  0.0   2604  1688 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
hasi      132867  0.0  0.0   2604  1688 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
hasi      132876  0.0  0.0   2604  1944 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
hasi      132883  0.0  0.0   2604  1856 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
hasi      132893  0.0  0.0   2604  1596 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
hasi      132925  0.0  0.0   2604  1932 ?        Ds   12:28   0:00 /usr/lib/ssh/sftp-server
hasi      132935  0.0  0.0   2604  1716 ?        Ds   12:28   0:00 /usr/lib/ssh/sftp-server
hasi      132946  0.0  0.0   2604  2044 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
root      132948  0.0  0.0      0     0 ?        D    12:29   0:00 [kworker/u64:109+flush-btrfs-1]
hasi      132955  0.0  0.0   2604  1716 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
hasi      132963  0.0  0.0   2604  1732 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
hasi      132970  0.0  0.0   2604  1596 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
hasi      132977  0.0  0.0   2604  1804 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
hasi      132985  0.0  0.0   2604  2016 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
hasi      133001  0.0  0.0   2604  1944 ?        Ds   12:31   0:00 /usr/lib/ssh/sftp-server
hasi      133011  0.0  0.0   2604  1916 ?        Ds   12:31   0:00 /usr/lib/ssh/sftp-server
hasi      133018  0.0  0.0   2604  1776 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
root      133022  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:110+flush-btrfs-1]
hasi      133030  0.0  0.0   2604  1776 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
hasi      133037  0.0  0.0   2604  1704 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
root      133038  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:111+flush-btrfs-1]
hasi      133047  0.0  0.0   2604  1728 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
hasi      133054  0.0  0.0   2604  1704 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
root      133055  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:112+flush-btrfs-1]
hasi      133064  0.0  0.0   2604  1700 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
hasi      133075  0.0  0.0   2604  1772 ?        Ds   12:33   0:00 /usr/lib/ssh/sftp-server
hasi      133082  0.0  0.0   2604  1596 ?        Ds   12:33   0:00 /usr/lib/ssh/sftp-server
hasi      133098  0.0  0.0   2604  1596 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
hasi      133106  0.0  0.0   2604  2016 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
hasi      133115  0.0  0.0   2604  1944 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
hasi      133124  0.0  0.0   2604  1732 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
hasi      133131  0.0  0.0   2604  1716 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
hasi      133138  0.0  0.0   2604  1772 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
hasi      133157  0.0  0.0   2604  1864 ?        Ds   12:36   0:00 /usr/lib/ssh/sftp-server
hasi      133167  0.0  0.0   2604  1844 ?        Ds   12:36   0:00 /usr/lib/ssh/sftp-server
hasi      133175  0.0  0.0   2604  1900 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
hasi      133188  0.0  0.0   2604  1732 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
hasi      133195  0.0  0.0   2604  1824 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
hasi      133204  0.0  0.0   2604  1704 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
hasi      133211  0.0  0.0   2604  1944 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
hasi      133221  0.0  0.0   2604  2044 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
hasi      133234  0.0  0.0   2604  1716 ?        Ds   12:38   0:00 /usr/lib/ssh/sftp-server
hasi      133241  0.0  0.0   2604  1972 ?        Ds   12:38   0:00 /usr/lib/ssh/sftp-server
root      133243  0.0  0.0      0     0 ?        D    12:38   0:00 [kworker/u64:113+flush-btrfs-1]
root      133249  0.0  0.0      0     0 ?        D    12:38   0:00 [kworker/u64:115+flush-btrfs-1]
hasi      133262  0.0  0.0   2604  1872 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
hasi      133269  0.0  0.0   2604  1704 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
hasi      133277  0.0  0.0   2604  1804 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
hasi      133284  0.0  0.0   2604  1716 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
hasi      133291  0.0  0.0   2604  1900 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
hasi      133299  0.0  0.0   2604  1904 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
root      133310  0.0  0.0      0     0 ?        D    12:41   0:00 [kworker/u64:116+flush-btrfs-1]
hasi      133318  0.0  0.0   2604  2016 ?        Ds   12:41   0:00 /usr/lib/ssh/sftp-server
root      133322  0.0  0.0      0     0 ?        D    12:41   0:00 [kworker/u64:117+flush-btrfs-1]
hasi      133332  0.0  0.0   2604  1716 ?        Ds   12:41   0:00 /usr/lib/ssh/sftp-server
hasi      133339  0.0  0.0   2604  1688 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
hasi      133348  0.0  0.0   2604  1900 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
hasi      133357  0.0  0.0   2604  1772 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
hasi      133367  0.0  0.0   2604  1864 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
hasi      133374  0.0  0.0   2604  1804 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
hasi      133383  0.0  0.0   2604  1804 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
root      133388  0.0  0.0      0     0 ?        D    12:42   0:00 [kworker/u64:118+flush-btrfs-1]
root      133389  0.0  0.0      0     0 ?        D    12:43   0:00 [kworker/u64:120+flush-btrfs-1]
hasi      133396  0.0  0.0   2604  1728 ?        Ds   12:43   0:00 /usr/lib/ssh/sftp-server
hasi      133403  0.0  0.0   2604  1596 ?        Ds   12:43   0:00 /usr/lib/ssh/sftp-server
hasi      133414  0.0  0.0   2604  1752 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
hasi      133421  0.0  0.0   2604  1864 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
hasi      133429  0.0  0.0   2604  1872 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
hasi      133436  0.0  0.0   2604  1596 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
hasi      133443  0.0  0.0   2604  1752 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
hasi      133450  0.0  0.0   2604  1688 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
hasi      133464  0.0  0.0   2604  1916 ?        Ds   12:46   0:00 /usr/lib/ssh/sftp-server
hasi      133474  0.0  0.0   2604  1732 ?        Ds   12:46   0:00 /usr/lib/ssh/sftp-server
hasi      133481  0.0  0.0   2604  1972 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
hasi      133501  0.0  0.0   2604  1772 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
hasi      133504  0.0  0.0   2604  1772 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
root      133505  0.0  0.0      0     0 ?        D    12:47   0:00 [kworker/u64:121+flush-btrfs-1]
hasi      133514  0.0  0.0   2604  1688 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
hasi      133521  0.0  0.0   2604  1932 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
hasi      133533  0.0  0.0   2604  1752 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
hasi      133547  0.0  0.0   2604  1776 ?        Ds   12:48   0:00 /usr/lib/ssh/sftp-server
hasi      133554  0.0  0.0   2604  1908 ?        Ds   12:48   0:00 /usr/lib/ssh/sftp-server
hasi      133568  0.0  0.0   2604  1944 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
hasi      133575  0.0  0.0   2604  1864 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
root      133576  0.0  0.0      0     0 ?        D    12:49   0:00 [kworker/u64:122+flush-btrfs-1]
hasi      133584  0.0  0.0   2604  1916 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
hasi      133591  0.0  0.0   2604  1596 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
hasi      133598  0.0  0.0   2604  2016 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
hasi      133606  0.0  0.0   2604  2000 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
hasi      133624  0.0  0.0   2604  1704 ?        Ds   12:51   0:00 /usr/lib/ssh/sftp-server
hasi      133634  0.0  0.0   2604  1716 ?        Ds   12:51   0:00 /usr/lib/ssh/sftp-server
hasi      133641  0.0  0.0   2604  1704 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
hasi      133652  0.0  0.0   2604  1696 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
hasi      133656  0.0  0.0   2604  1904 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
hasi      133667  0.0  0.0   2604  1872 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
hasi      133674  0.0  0.0   2604  1944 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
root      133675  0.0  0.0      0     0 ?        D    12:52   0:00 [kworker/u64:123+flush-btrfs-1]
hasi      133684  0.0  0.0   2604  1772 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
hasi      133695  0.0  0.0   2604  1716 ?        Ds   12:53   0:00 /usr/lib/ssh/sftp-server
hasi      133702  0.0  0.0   2604  1596 ?        Ds   12:53   0:00 /usr/lib/ssh/sftp-server
hasi      133713  0.0  0.0   2604  1704 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
hasi      133721  0.0  0.0   2604  1688 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
root      133722  0.0  0.0      0     0 ?        D    12:54   0:00 [kworker/u64:125+flush-btrfs-1]
hasi      133730  0.0  0.0   2604  1804 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
hasi      133737  0.0  0.0   2736  1908 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
hasi      133746  0.0  0.0   2604  1732 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
hasi      133753  0.0  0.0   2604  1716 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
root      133757  0.0  0.0      0     0 ?        D    12:55   0:00 [kworker/u64:126+flush-btrfs-1]
hasi      133769  0.0  0.0   2604  1688 ?        Ds   12:56   0:00 /usr/lib/ssh/sftp-server
hasi      133779  0.0  0.0   2604  1596 ?        Ds   12:56   0:00 /usr/lib/ssh/sftp-server
hasi      133786  0.0  0.0   2604  1880 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
hasi      133796  0.0  0.0   2604  1716 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
hasi      133803  0.0  0.0   2604  1872 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
hasi      133812  0.0  0.0   2604  1916 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
hasi      133819  0.0  0.0   2604  1900 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
hasi      133830  0.0  0.0   2604  1772 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
root      133835  0.0  0.0      0     0 ?        D    12:57   0:00 [kworker/u64:127+flush-btrfs-1]
hasi      133842  0.0  0.0   2776  1952 ?        Ds   12:58   0:00 /usr/lib/ssh/sftp-server
hasi      133851  0.0  0.0   2604  1872 ?        Ds   12:58   0:00 /usr/lib/ssh/sftp-server
hasi      133863  0.0  0.0   2604  2072 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
hasi      133874  0.0  0.0   2604  1916 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
hasi      133882  0.0  0.0   2604  1804 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
hasi      133889  0.0  0.0   2604  1732 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
hasi      133896  0.0  0.0   2604  1752 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
hasi      133903  0.0  0.0   2604  1700 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
root      133917  0.0  0.0      0     0 ?        D    13:01   0:00 [kworker/u64:128+flush-btrfs-1]
hasi      133925  0.0  0.0   2604  1752 ?        Ds   13:01   0:00 /usr/lib/ssh/sftp-server
hasi      133935  0.0  0.0   2604  1728 ?        Ds   13:01   0:00 /usr/lib/ssh/sftp-server
hasi      133942  0.0  0.0   2604  1688 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
hasi      133950  0.0  0.0   2604  1716 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
hasi      133957  0.0  0.0   2604  2016 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
hasi      133970  0.0  0.0   2604  1700 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
hasi      133977  0.0  0.0   2604  1916 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
hasi      133986  0.0  0.0   2604  1804 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
root      133991  0.0  0.0      0     0 ?        D    13:02   0:00 [kworker/u64:129+flush-btrfs-1]
hasi      133998  0.0  0.0   2604  1776 ?        Ds   13:03   0:00 /usr/lib/ssh/sftp-server
hasi      134005  0.0  0.0   2604  1704 ?        Ds   13:03   0:00 /usr/lib/ssh/sftp-server
hasi      134017  0.0  0.0   2604  1776 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
hasi      134027  0.0  0.0   4408  2808 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
hasi      134030  0.0  0.0   2604  1704 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
hasi      134047  0.0  0.0   2604  1700 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
hasi      134058  0.0  0.0   2604  1916 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
hasi      134065  0.0  0.0   2604  1752 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
root      134073  0.0  0.0      0     0 ?        D    13:06   0:00 [kworker/u64:130+flush-btrfs-1]
hasi      134081  0.0  0.0   2604  1916 ?        Ds   13:06   0:00 /usr/lib/ssh/sftp-server
hasi      134091  0.0  0.0   2604  1688 ?        Ds   13:06   0:00 /usr/lib/ssh/sftp-server
hasi      134098  0.0  0.0   2604  1704 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
hasi      134106  0.0  0.0   2604  1864 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
hasi      134113  0.0  0.0   2604  1776 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
hasi      134122  0.0  0.0   2604  1728 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
hasi      134129  0.0  0.0   2604  2000 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
root      134132  0.0  0.0      0     0 ?        D    13:07   0:00 [kworker/u64:131+flush-btrfs-1]
hasi      134142  0.0  0.0   2604  1700 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
root      134147  0.0  0.0      0     0 ?        D    13:07   0:00 [kworker/u64:135+flush-btrfs-1]
hasi      134154  0.0  0.0   2604  1972 ?        Ds   13:08   0:00 /usr/lib/ssh/sftp-server
hasi      134162  0.0  0.0   2604  1932 ?        Ds   13:08   0:00 /usr/lib/ssh/sftp-server
hasi      134180  0.0  0.0   2604  1772 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
hasi      134187  0.0  0.0   2604  1716 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
hasi      134199  0.0  0.0   2604  1872 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
hasi      134202  0.0  0.0   2604  1732 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
hasi      134209  0.0  0.0   2604  1716 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
hasi      134216  0.0  0.0   2604  1864 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
hasi      134230  0.0  0.0   2604  1804 ?        Ds   13:11   0:00 /usr/lib/ssh/sftp-server
hasi      134240  0.0  0.0   2604  1704 ?        Ds   13:11   0:00 /usr/lib/ssh/sftp-server
hasi      134247  0.0  0.0   2604  1916 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
hasi      134255  0.0  0.0   2604  1916 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
hasi      134262  0.0  0.0   2604  1904 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
hasi      134273  0.0  0.0   2604  1704 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
hasi      134282  0.0  0.0   2604  1872 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
hasi      134289  0.0  0.0   2604  1728 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
hasi      134300  0.0  0.0   2604  1804 ?        Ds   13:13   0:00 /usr/lib/ssh/sftp-server
hasi      134307  0.0  0.0   2604  1944 ?        Ds   13:13   0:00 /usr/lib/ssh/sftp-server
hasi      134318  0.0  0.0   2604  1732 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
hasi      134325  0.0  0.0   2604  1596 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
hasi      134338  0.0  0.0   2604  1688 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
hasi      134340  0.0  0.0   2604  2016 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
hasi      134356  0.0  0.0   2604  1872 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
hasi      134363  0.0  0.0   2604  1916 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
root      134367  0.0  0.0      0     0 ?        D    13:15   0:00 [kworker/u64:136+flush-btrfs-1]
hasi      134381  0.0  0.0   2604  1732 ?        Ds   13:16   0:00 /usr/lib/ssh/sftp-server
hasi      134391  0.0  0.0   2604  1772 ?        Ds   13:16   0:00 /usr/lib/ssh/sftp-server
hasi      134398  0.0  0.0   2604  1752 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
hasi      134406  0.0  0.0   2604  1704 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
hasi      134413  0.0  0.0   2604  1688 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
hasi      134422  0.0  0.0   2604  1916 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
hasi      134431  0.0  0.0   2604  1704 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
hasi      134438  0.0  0.0   2604  1752 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
hasi      134449  0.0  0.0   2604  1704 ?        Ds   13:18   0:00 /usr/lib/ssh/sftp-server
hasi      134461  0.0  0.0   2604  1900 ?        Ds   13:18   0:00 /usr/lib/ssh/sftp-server
hasi      134486  0.0  0.0   2604  1872 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
root      134487  0.0  0.0      0     0 ?        D    13:19   0:00 [kworker/u64:138+flush-btrfs-1]
hasi      134494  0.0  0.0   2604  1916 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
hasi      134502  0.0  0.0   2604  1916 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
hasi      134509  0.0  0.0   2604  1864 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
hasi      134516  0.0  0.0   2604  1752 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
hasi      134523  0.0  0.0   2604  1688 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
hasi      134537  0.0  0.0   2604  1772 ?        Ds   13:21   0:00 /usr/lib/ssh/sftp-server
hasi      134547  0.0  0.0   2604  1944 ?        Ds   13:21   0:00 /usr/lib/ssh/sftp-server
hasi      134554  0.0  0.0   2604  1732 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
hasi      134562  0.0  0.0   2604  1864 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
hasi      134569  0.0  0.0   2604  1596 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
hasi      134578  0.0  0.0   2604  1772 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
hasi      134587  0.0  0.0   2604  1916 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
hasi      134594  0.0  0.0   2604  1864 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
hasi      134605  0.0  0.0   2604  1716 ?        Ds   13:23   0:00 /usr/lib/ssh/sftp-server
hasi      134612  0.0  0.0   2604  1716 ?        Ds   13:23   0:00 /usr/lib/ssh/sftp-server
hasi      134623  0.0  0.0   2604  1864 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
hasi      134630  0.0  0.0   2604  1716 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
hasi      134638  0.0  0.0   2604  1732 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
hasi      134645  0.0  0.0   2604  1900 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
hasi      134653  0.0  0.0   2604  1804 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
hasi      134660  0.0  0.0   2604  1688 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
root      134675  0.0  0.0      0     0 ?        D    13:26   0:00 [kworker/u64:139+flush-btrfs-1]
hasi      134683  0.0  0.0   2604  1732 ?        Ds   13:26   0:00 /usr/lib/ssh/sftp-server
hasi      134693  0.0  0.0   2604  1700 ?        Ds   13:26   0:00 /usr/lib/ssh/sftp-server
hasi      134700  0.0  0.0   2604  1716 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
hasi      134708  0.0  0.0   2604  1688 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
hasi      134715  0.0  0.0   2604  1864 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
hasi      134724  0.0  0.0   2604  1596 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
hasi      134733  0.0  0.0   2604  1776 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
hasi      134740  0.0  0.0   2604  1728 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
root      134745  0.0  0.0      0     0 ?        D    13:28   0:00 [kworker/u64:142+flush-btrfs-1]
hasi      134752  0.0  0.0   2604  1688 ?        Ds   13:28   0:00 /usr/lib/ssh/sftp-server
hasi      134759  0.0  0.0   2604  1904 ?        Ds   13:28   0:00 /usr/lib/ssh/sftp-server
hasi      134771  0.0  0.0   2604  1932 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
hasi      134784  0.0  0.0   2604  1772 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
hasi      134792  0.0  0.0   2604  1916 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
hasi      134799  0.0  0.0   2604  1700 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
hasi      134806  0.0  0.0   2604  1716 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
hasi      134813  0.0  0.0   2604  1596 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
root      134814  0.0  0.0      0     0 ?        D    13:29   0:00 [kworker/u64:143+flush-btrfs-1]
hasi      134828  0.0  0.0   2604  1776 ?        Ds   13:31   0:00 /usr/lib/ssh/sftp-server
hasi      134838  0.0  0.0   2604  1916 ?        Ds   13:31   0:00 /usr/lib/ssh/sftp-server
hasi      134848  0.0  0.0   2604  1716 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
hasi      134856  0.0  0.0   2604  1804 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
hasi      134863  0.0  0.0   2604  1872 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
hasi      134872  0.0  0.0   2604  1716 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
hasi      134881  0.0  0.0   2604  1916 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
hasi      134888  0.0  0.0   2604  1916 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
root      134893  0.0  0.0      0     0 ?        D    13:33   0:00 [kworker/u64:145+flush-btrfs-1]
hasi      134900  0.0  0.0   2604  1872 ?        Ds   13:33   0:00 /usr/lib/ssh/sftp-server
hasi      134911  0.0  0.0   2604  1700 ?        Ds   13:33   0:00 /usr/lib/ssh/sftp-server
hasi      134923  0.0  0.0   2604  1728 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
hasi      134930  0.0  0.0   2604  1944 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
hasi      134938  0.0  0.0   2604  1880 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
hasi      134946  0.0  0.0   2604  1732 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
hasi      134953  0.0  0.0   2604  1944 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
hasi      134960  0.0  0.0   2604  1900 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
hasi      134981  0.0  0.0   2604  1716 ?        Ds   13:36   0:00 /usr/lib/ssh/sftp-server
hasi      134991  0.0  0.0   2604  1804 ?        Ds   13:36   0:00 /usr/lib/ssh/sftp-server
hasi      134999  0.0  0.0   2604  1864 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
hasi      135007  0.0  0.0   2604  1944 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
hasi      135014  0.0  0.0   2604  1716 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
hasi      135023  0.0  0.0   2604  1716 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
hasi      135032  0.0  0.0   2604  1596 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
hasi      135039  0.0  0.0   2604  1700 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
root      135065  0.0  0.0      0     0 ?        D    13:38   0:00 [kworker/u64:146+flush-btrfs-1]
hasi      135072  0.0  0.0   2604  1596 ?        Ds   13:38   0:00 /usr/lib/ssh/sftp-server
hasi      135085  0.0  0.0   2604  1916 ?        Ds   13:38   0:00 /usr/lib/ssh/sftp-server
hasi      135097  0.0  0.0   2604  1972 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
hasi      135105  0.0  0.0   2604  1716 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
hasi      135113  0.0  0.0   2604  1700 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
hasi      135120  0.0  0.0   2604  1900 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
hasi      135133  0.0  0.0   4308  2940 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
hasi      135140  0.0  0.0   2604  1960 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
root      135161  0.0  0.0      0     0 ?        D    13:41   0:00 [kworker/u64:148+flush-btrfs-1]
hasi      135169  0.0  0.0   2604  1728 ?        Ds   13:41   0:00 /usr/lib/ssh/sftp-server
root      135170  0.0  0.0      0     0 ?        D    13:41   0:00 [kworker/u64:149+flush-btrfs-1]
hasi      135183  0.0  0.0   2604  1732 ?        Ds   13:41   0:00 /usr/lib/ssh/sftp-server
hasi      135191  0.0  0.0   2604  1704 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
hasi      135199  0.0  0.0   2604  1804 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
hasi      135206  0.0  0.0   2604  1700 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
hasi      135215  0.0  0.0   2604  1752 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
hasi      135224  0.0  0.0   2604  1864 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
hasi      135231  0.0  0.0   2604  1688 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
root      135234  0.0  0.0      0     0 ?        D    13:43   0:00 [kworker/u64:150+flush-btrfs-1]
hasi      135241  0.0  0.0   2604  1688 ?        Ds   13:43   0:00 /usr/lib/ssh/sftp-server
hasi      135250  0.0  0.0   2604  1732 ?        Ds   13:43   0:00 /usr/lib/ssh/sftp-server
hasi      135258  0.0  0.0   2604  1776 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
hasi      135265  0.0  0.0   2604  1716 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
hasi      135273  0.0  0.0   2604  1596 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
hasi      135280  0.0  0.0   2604  1732 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
hasi      135287  0.0  0.0   2604  1916 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
hasi      135294  0.0  0.0   2604  1864 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
hasi      135308  0.0  0.0   2604  1732 ?        Ds   13:46   0:00 /usr/lib/ssh/sftp-server
hasi      135316  0.0  0.0   2604  1596 ?        Ds   13:46   0:00 /usr/lib/ssh/sftp-server
hasi      135323  0.0  0.0   2604  1716 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
hasi      135337  0.0  0.0   2604  1932 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
hasi      135345  0.0  0.0   2604  1944 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
hasi      135355  0.0  0.0   2604  1704 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
hasi      135364  0.0  0.0   2604  1704 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
hasi      135371  0.0  0.0   2604  1864 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
hasi      135385  0.0  0.0   2604  1704 ?        Ds   13:48   0:00 /usr/lib/ssh/sftp-server
hasi      135395  0.0  0.0   2604  1752 ?        Ds   13:48   0:00 /usr/lib/ssh/sftp-server
hasi      135403  0.0  0.0   2604  1752 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
hasi      135410  0.0  0.0   2604  1864 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
hasi      135418  0.0  0.0   2604  1700 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
hasi      135425  0.0  0.0   2604  1944 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
hasi      135432  0.0  0.0   2676  1844 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
hasi      135439  0.0  0.0   2604  1776 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
hasi      135456  0.0  0.0   2604  1816 ?        Ds   13:51   0:00 /usr/lib/ssh/sftp-server
hasi      135468  0.0  0.0   2604  1716 ?        Ds   13:51   0:00 /usr/lib/ssh/sftp-server
hasi      135475  0.0  0.0   2604  1916 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
hasi      135483  0.0  0.0   2604  1776 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
hasi      135490  0.0  0.0   2604  2072 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
hasi      135500  0.0  0.0   2604  1872 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
hasi      135509  0.0  0.0   2604  1864 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
hasi      135517  0.0  0.0   2604  1716 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
hasi      135527  0.0  0.0   2604  1704 ?        Ds   13:53   0:00 /usr/lib/ssh/sftp-server
hasi      135542  0.0  0.0   2604  1596 ?        Ds   13:53   0:00 /usr/lib/ssh/sftp-server
hasi      135551  0.0  0.0   2604  1716 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
hasi      135559  0.0  0.0   2604  1732 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
hasi      135567  0.0  0.0   2604  1916 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
hasi      135574  0.0  0.0   2604  1716 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
hasi      135582  0.0  0.0   2604  1728 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
hasi      135589  0.0  0.0   2604  1772 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
hasi      135602  0.0  0.0   2604  1700 ?        Ds   13:56   0:00 /usr/lib/ssh/sftp-server
hasi      135610  0.0  0.0   2604  1992 ?        Ds   13:56   0:00 /usr/lib/ssh/sftp-server
root      135614  0.0  0.0      0     0 ?        D    13:57   0:00 [kworker/u64:151+flush-btrfs-1]
hasi      135619  0.0  0.0   2604  1688 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
hasi      135627  0.0  0.0   2604  2000 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
hasi      135637  0.0  0.0   2604  1596 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
hasi      135647  0.0  0.0   2604  1992 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
root      135652  0.0  0.0      0     0 ?        D    13:57   0:00 [kworker/u64:152+flush-btrfs-1]
hasi      135659  0.0  0.0   2604  2016 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
hasi      135667  0.0  0.0   2604  1900 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
hasi      135679  0.0  0.0   2604  1972 ?        Ds   13:58   0:00 /usr/lib/ssh/sftp-server
hasi      135688  0.0  0.0   2604  1916 ?        Ds   13:58   0:00 /usr/lib/ssh/sftp-server
root      135689  0.0  0.0      0     0 ?        D    13:59   0:00 [kworker/u64:153+flush-btrfs-1]
hasi      135697  0.0  0.0   2604  1992 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
hasi      135707  0.0  0.0   2604  1804 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
hasi      135716  0.0  0.0   2604  1716 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
hasi      135723  0.0  0.0   2604  1880 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
hasi      135731  0.0  0.0   2604  1772 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
hasi      135738  0.0  0.0   2604  1916 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
root      135754  0.0  0.0      0     0 ?        D    14:01   0:00 [kworker/u64:154+flush-btrfs-1]
hasi      135763  0.0  0.0   2604  1944 ?        Ds   14:01   0:00 /usr/lib/ssh/sftp-server
hasi      135771  0.0  0.0   2604  1716 ?        Ds   14:01   0:00 /usr/lib/ssh/sftp-server
hasi      135778  0.0  0.0   2604  1776 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
hasi      135789  0.0  0.0   2604  1872 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
hasi      135796  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
hasi      135805  0.0  0.0   2604  1716 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
hasi      135814  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
hasi      135821  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
hasi      135831  0.0  0.0   2604  1944 ?        Ds   14:03   0:00 /usr/lib/ssh/sftp-server
hasi      135846  0.0  0.0   2604  1960 ?        Ds   14:03   0:00 /usr/lib/ssh/sftp-server
hasi      135857  0.0  0.0   2604  1704 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
hasi      135864  0.0  0.0   2604  1904 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
hasi      135878  0.0  0.0   2604  1804 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
hasi      135885  0.0  0.0   2604  1716 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
hasi      135892  0.0  0.0   2604  1804 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
hasi      135899  0.0  0.0   2604  1596 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
hasi      135910  0.0  0.0   2604  1688 ?        Ds   14:06   0:00 /usr/lib/ssh/sftp-server
hasi      135918  0.0  0.0   2604  1728 ?        Ds   14:06   0:00 /usr/lib/ssh/sftp-server
hasi      135926  0.0  0.0   2604  1732 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
hasi      135935  0.0  0.0   2604  1772 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
root      135936  0.0  0.0      0     0 ?        D    14:07   0:00 [kworker/u64:155+flush-btrfs-1]
hasi      135943  0.0  0.0   2604  1772 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
hasi      135952  0.0  0.0   2604  1728 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
hasi      135961  0.0  0.0   2604  1864 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
hasi      135968  0.0  0.0   2604  2044 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
root      135969  0.0  0.0      0     0 ?        D    14:07   0:00 [kworker/u64:156+flush-btrfs-1]
hasi      136004  0.0  0.0   2604  1908 ?        Ds   14:08   0:00 /usr/lib/ssh/sftp-server
root      136011  0.0  0.0      0     0 ?        D    14:08   0:00 [kworker/u64:157+flush-btrfs-1]
hasi      136019  0.0  0.0   2604  1732 ?        Ds   14:08   0:00 /usr/lib/ssh/sftp-server
hasi      136027  0.0  0.0   2604  1700 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
hasi      136035  0.0  0.0   2604  1700 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
hasi      136043  0.0  0.0   2604  1864 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
hasi      136050  0.0  0.0   2604  1772 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
hasi      136057  0.0  0.0   2604  1776 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
hasi      136064  0.0  0.0   2604  1944 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
hasi      136075  0.0  0.0   2604  1772 ?        Ds   14:11   0:00 /usr/lib/ssh/sftp-server
hasi      136084  0.0  0.0   2604  1944 ?        Ds   14:11   0:00 /usr/lib/ssh/sftp-server
hasi      136092  0.0  0.0   2604  1700 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
hasi      136100  0.0  0.0   2604  1944 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
hasi      136107  0.0  0.0   2604  1752 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
hasi      136116  0.0  0.0   2604  1716 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
hasi      136125  0.0  0.0   2604  1804 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
hasi      136132  0.0  0.0   2604  1772 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
hasi      136143  0.0  0.0   2604  1940 ?        Ds   14:13   0:00 /usr/lib/ssh/sftp-server
hasi      136153  0.0  0.0   2604  1864 ?        Ds   14:13   0:00 /usr/lib/ssh/sftp-server
hasi      136162  0.0  0.0   2604  1772 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
hasi      136170  0.0  0.0   2604  1716 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
hasi      136178  0.0  0.0   2604  2000 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
hasi      136187  0.0  0.0   2604  1688 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
hasi      136194  0.0  0.0   2604  1972 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
hasi      136207  0.0  0.0   2604  1772 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
hasi      136223  0.0  0.0   2604  1916 ?        Ds   14:16   0:00 /usr/lib/ssh/sftp-server
hasi      136231  0.0  0.0   2604  1776 ?        Ds   14:16   0:00 /usr/lib/ssh/sftp-server
root      136232  0.0  0.0      0     0 ?        D    14:17   0:00 [kworker/u64:159+flush-btrfs-1]
hasi      136239  0.0  0.0   2604  1732 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
hasi      136247  0.0  0.0   2604  1716 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
hasi      136254  0.0  0.0   2604  1596 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
hasi      136263  0.0  0.0   2604  1716 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
hasi      136272  0.0  0.0   2604  1732 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
hasi      136279  0.0  0.0   2604  1596 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
hasi      136290  0.0  0.0   2604  1596 ?        Ds   14:18   0:00 /usr/lib/ssh/sftp-server
hasi      136299  0.0  0.0   2604  1716 ?        Ds   14:18   0:00 /usr/lib/ssh/sftp-server
hasi      136312  0.0  0.0   2604  1860 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
hasi      136323  0.0  0.0   2604  1716 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
hasi      136331  0.0  0.0   2604  1700 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
hasi      136338  0.0  0.0   2604  1900 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
hasi      136352  0.0  0.0   2604  1704 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
hasi      136359  0.0  0.0   2604  1728 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
hasi      136375  0.0  0.0   2604  1716 ?        Ds   14:21   0:00 /usr/lib/ssh/sftp-server
hasi      136384  0.0  0.0   2604  1752 ?        Ds   14:21   0:00 /usr/lib/ssh/sftp-server
hasi      136391  0.0  0.0   2604  1772 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
hasi      136399  0.0  0.0   2604  1700 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
hasi      136406  0.0  0.0   2604  1732 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
hasi      136415  0.0  0.0   2604  1716 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
hasi      136424  0.0  0.0   2604  1804 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
hasi      136431  0.0  0.0   2604  1728 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
root      136434  0.0  0.0      0     0 ?        D    14:23   0:00 [kworker/u64:160+flush-btrfs-1]
hasi      136442  0.0  0.0   2604  1704 ?        Ds   14:23   0:00 /usr/lib/ssh/sftp-server
hasi      136450  0.0  0.0   2604  1716 ?        Ds   14:23   0:00 /usr/lib/ssh/sftp-server
hasi      136458  0.0  0.0   2604  1872 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
hasi      136465  0.0  0.0   2604  1752 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
hasi      136473  0.0  0.0   2604  1864 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
hasi      136480  0.0  0.0   2604  1700 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
hasi      136487  0.0  0.0   2604  1908 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
hasi      136495  0.0  0.0   2604  1944 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
hasi      136507  0.0  0.0   2604  1688 ?        Ds   14:26   0:00 /usr/lib/ssh/sftp-server
hasi      136516  0.0  0.0   2604  1908 ?        Ds   14:26   0:00 /usr/lib/ssh/sftp-server
hasi      136526  0.0  0.0   2604  2044 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
hasi      136537  0.0  0.0   2604  1880 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
hasi      136545  0.0  0.0   2604  1944 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
hasi      136554  0.0  0.0   2604  1960 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
hasi      136569  0.0  0.0   2604  1944 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
hasi      136577  0.0  0.0   2604  1716 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
hasi      136587  0.0  0.0   2604  1864 ?        Ds   14:28   0:00 /usr/lib/ssh/sftp-server
hasi      136596  0.0  0.0   2604  1700 ?        Ds   14:28   0:00 /usr/lib/ssh/sftp-server
hasi      136604  0.0  0.0   2604  1864 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
hasi      136611  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
hasi      136619  0.0  0.0   2604  1728 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
hasi      136626  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
hasi      136633  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
hasi      136640  0.0  0.0   2604  1904 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
root      136644  0.0  0.0      0     0 ?        D    14:29   0:00 [kworker/u64:161+flush-btrfs-1]
root      136646  0.0  0.0      0     0 ?        D    14:29   0:00 [kworker/u64:162+flush-btrfs-1]
hasi      136657  0.0  0.0   2604  2016 ?        Ds   14:31   0:00 /usr/lib/ssh/sftp-server
hasi      136668  0.0  0.0   2604  1704 ?        Ds   14:31   0:00 /usr/lib/ssh/sftp-server
hasi      136678  0.0  0.0   2604  1712 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
hasi      136689  0.0  0.0   2604  1804 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
hasi      136692  0.0  0.0   2604  1972 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
hasi      136703  0.0  0.0   2604  1752 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
hasi      136712  0.0  0.0   2604  1688 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
hasi      136719  0.0  0.0   2604  1772 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
hasi      136729  0.0  0.0   2604  1776 ?        Ds   14:33   0:00 /usr/lib/ssh/sftp-server
hasi      136740  0.0  0.0   2604  1972 ?        Ds   14:33   0:00 /usr/lib/ssh/sftp-server
hasi      136751  0.0  0.0   2604  1944 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
hasi      136758  0.0  0.0   2604  1804 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
hasi      136766  0.0  0.0   2604  1944 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
hasi      136773  0.0  0.0   2604  1872 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
hasi      136780  0.0  0.0   2604  1860 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
hasi      136787  0.0  0.0   2604  1688 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
hasi      136799  0.0  0.0   2604  1772 ?        Ds   14:36   0:00 /usr/lib/ssh/sftp-server
hasi      136807  0.0  0.0   2604  1596 ?        Ds   14:36   0:00 /usr/lib/ssh/sftp-server
hasi      136815  0.0  0.0   2604  1972 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
hasi      136827  0.0  0.0   2604  1900 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
hasi      136836  0.0  0.0   2604  1916 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
hasi      136845  0.0  0.0   2604  1716 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
hasi      136855  0.0  0.0   2604  1776 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
hasi      136862  0.0  0.0   2604  1772 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
root      136866  0.0  0.0      0     0 ?        D    14:38   0:00 [kworker/u64:163+flush-btrfs-1]
hasi      136874  0.0  0.0   2604  1900 ?        Ds   14:38   0:00 /usr/lib/ssh/sftp-server
hasi      136906  0.0  0.0   2604  1776 ?        Ds   14:38   0:00 /usr/lib/ssh/sftp-server
hasi      136914  0.0  0.0   2604  1732 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
root      136915  0.0  0.0      0     0 ?        D    14:39   0:00 [kworker/u64:164+flush-btrfs-1]
hasi      136922  0.0  0.0   2604  1752 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
hasi      136930  0.0  0.0   2604  1688 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
hasi      136941  0.0  0.0   2604  1596 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
hasi      136948  0.0  0.0   2604  1916 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
hasi      136955  0.0  0.0   2604  1700 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
hasi      136975  0.0  0.0   2604  1728 ?        Ds   14:41   0:00 /usr/lib/ssh/sftp-server
hasi      136984  0.0  0.0   2604  1688 ?        Ds   14:41   0:00 /usr/lib/ssh/sftp-server
hasi      136993  0.0  0.0   2604  1700 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
root      136994  0.0  0.0      0     0 ?        D    14:42   0:00 [kworker/u64:165+flush-btrfs-1]
hasi      137001  0.0  0.0   2604  1872 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
hasi      137008  0.0  0.0   2604  1972 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
hasi      137019  0.0  0.0   2604  1716 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
hasi      137028  0.0  0.0   2604  1864 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
hasi      137035  0.0  0.0   2604  1704 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
hasi      137045  0.0  0.0   2604  1732 ?        Ds   14:43   0:00 /usr/lib/ssh/sftp-server
hasi      137053  0.0  0.0   2604  1732 ?        Ds   14:43   0:00 /usr/lib/ssh/sftp-server
hasi      137061  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
hasi      137071  0.0  0.0   2604  1688 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
hasi      137079  0.0  0.0   2604  1900 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
root      137081  0.0  0.0      0     0 ?        D    14:44   0:00 [kworker/u64:166+flush-btrfs-1]
root      137082  0.0  0.0      0     0 ?        D    14:44   0:00 [kworker/u64:167+flush-btrfs-1]
hasi      137089  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
hasi      137096  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
hasi      137103  0.0  0.0   2604  1700 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
hasi      137115  0.0  0.0   2604  1772 ?        Ds   14:46   0:00 /usr/lib/ssh/sftp-server
hasi      137123  0.0  0.0   2604  1904 ?        Ds   14:46   0:00 /usr/lib/ssh/sftp-server
hasi      137133  0.0  0.0   2604  1772 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
hasi      137140  0.0  0.0   2604  1596 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
hasi      137147  0.0  0.0   2604  1732 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
hasi      137156  0.0  0.0   2604  1704 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
hasi      137166  0.0  0.0   2604  1700 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
hasi      137175  0.0  0.0   2604  1776 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
hasi      137188  0.0  0.0   2604  1804 ?        Ds   14:48   0:00 /usr/lib/ssh/sftp-server
hasi      137200  0.0  0.0   2604  2044 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
hasi      137207  0.0  0.0   2604  1944 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
hasi      137214  0.0  0.0   2604  1732 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
hasi      137222  0.0  0.0   2604  1872 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
hasi      137229  0.0  0.0   2604  1960 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
hasi      137239  0.0  0.0   2604  1716 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
hasi      137246  0.0  0.0   2604  1688 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
hasi      137257  0.0  0.0   2604  1704 ?        Ds   14:51   0:00 /usr/lib/ssh/sftp-server
root      137258  0.0  0.0      0     0 ?        D    14:51   0:00 [kworker/u64:168+flush-btrfs-1]
hasi      137266  0.0  0.0   2604  1864 ?        Ds   14:51   0:00 /usr/lib/ssh/sftp-server
hasi      137274  0.0  0.0   2604  1700 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
hasi      137281  0.0  0.0   2604  1688 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
hasi      137288  0.0  0.0   2604  1716 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
hasi      137297  0.0  0.0   2604  1752 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
hasi      137307  0.0  0.0   2604  1804 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
hasi      137314  0.0  0.0   2604  1752 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
hasi      137325  0.0  0.0   2604  1716 ?        Ds   14:53   0:00 /usr/lib/ssh/sftp-server
hasi      137335  0.0  0.0   2604  1900 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
hasi      137344  0.0  0.0   2604  1772 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
hasi      137351  0.0  0.0   2604  1944 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
hasi      137359  0.0  0.0   2604  1704 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
hasi      137366  0.0  0.0   2604  1704 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
hasi      137373  0.0  0.0   2604  1960 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
hasi      137381  0.0  0.0   2604  1872 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
hasi      137398  0.0  0.0   2604  1732 ?        Ds   14:56   0:00 /usr/lib/ssh/sftp-server
hasi      137407  0.0  0.0   2604  1716 ?        Ds   14:56   0:00 /usr/lib/ssh/sftp-server
hasi      137415  0.0  0.0   2604  1700 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
hasi      137422  0.0  0.0   2604  1772 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
hasi      137429  0.0  0.0   2604  1704 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
hasi      137438  0.0  0.0   2604  1776 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
hasi      137448  0.0  0.0   2604  1700 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
hasi      137455  0.0  0.0   2604  1732 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
hasi      137465  0.0  0.0   2604  1752 ?        Ds   14:58   0:00 /usr/lib/ssh/sftp-server
hasi      137479  0.0  0.0   2604  1752 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
hasi      137486  0.0  0.0   2604  1804 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
hasi      137493  0.0  0.0   2604  1864 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
hasi      137501  0.0  0.0   2604  1916 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
hasi      137508  0.0  0.0   2604  1772 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
hasi      137515  0.0  0.0   2604  2072 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
hasi      137530  0.0  0.0   2604  1716 ?        Ds   15:00   0:00 /usr/lib/ssh/sftp-server
root      137531  0.0  0.0      0     0 ?        D    15:00   0:00 [kworker/u64:171+flush-btrfs-1]
hasi      137547  0.0  0.0   2604  1864 ?        Ds   15:01   0:00 /usr/lib/ssh/sftp-server
hasi      137557  0.0  0.0   2604  1804 ?        Ds   15:01   0:00 /usr/lib/ssh/sftp-server
hasi      137566  0.0  0.0   2604  1716 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
hasi      137573  0.0  0.0   2604  1772 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
hasi      137580  0.0  0.0   2604  1732 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
hasi      137589  0.0  0.0   2604  1596 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
hasi      137598  0.0  0.0   2604  1596 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
hasi      137605  0.0  0.0   2604  1700 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
root      137609  0.0  0.0      0     0 ?        D    15:03   0:00 [kworker/u64:172+flush-btrfs-1]
root      137611  0.0  0.0      0     0 ?        D    15:03   0:00 [kworker/u64:173+flush-btrfs-1]
hasi      137625  0.0  0.0   2604  1816 ?        Ds   15:03   0:00 /usr/lib/ssh/sftp-server
hasi      137640  0.0  0.0   2604  1688 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
hasi      137647  0.0  0.0   2604  2044 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
hasi      137656  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
hasi      137664  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
hasi      137671  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
hasi      137678  0.0  0.0   2604  1916 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
hasi      137688  0.0  0.0   2604  1772 ?        Ds   15:05   0:00 /usr/lib/ssh/sftp-server
hasi      137699  0.0  0.0   2604  1704 ?        Ds   15:06   0:00 /usr/lib/ssh/sftp-server
hasi      137709  0.0  0.0   2604  1728 ?        Ds   15:06   0:00 /usr/lib/ssh/sftp-server
hasi      137717  0.0  0.0   2604  1776 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
hasi      137724  0.0  0.0   2604  2016 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
hasi      137732  0.0  0.0   2604  1752 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
hasi      137739  0.0  0.0   2604  1916 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
hasi      137748  0.0  0.0   2604  1864 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
hasi      137755  0.0  0.0   2604  1704 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
hasi      137766  0.0  0.0   2604  1688 ?        Ds   15:08   0:00 /usr/lib/ssh/sftp-server
hasi      137778  0.0  0.0   2604  1704 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
hasi      137785  0.0  0.0   2604  1772 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
hasi      137792  0.0  0.0   2604  1700 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
hasi      137800  0.0  0.0   2604  1596 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
hasi      137807  0.0  0.0   2604  1804 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
hasi      137816  0.0  0.0   2604  2044 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
hasi      137826  0.0  0.0   2604  1776 ?        Ds   15:10   0:00 /usr/lib/ssh/sftp-server
hasi      137844  0.0  0.0   2604  1816 ?        Ds   15:11   0:00 /usr/lib/ssh/sftp-server
hasi      137856  0.0  0.0   2604  1844 ?        Ds   15:11   0:00 /usr/lib/ssh/sftp-server
hasi      137865  0.0  0.0   2604  1732 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
hasi      137872  0.0  0.0   2604  1772 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
hasi      137879  0.0  0.0   2604  1752 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
hasi      137886  0.0  0.0   2604  1700 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
hasi      137895  0.0  0.0   2604  1596 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
hasi      137902  0.0  0.0   2604  1944 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
hasi      137914  0.0  0.0   2604  1752 ?        Ds   15:13   0:00 /usr/lib/ssh/sftp-server
hasi      137925  0.0  0.0   2604  1716 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
hasi      137932  0.0  0.0   2604  1700 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
hasi      137939  0.0  0.0   2604  1704 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
hasi      137947  0.0  0.0   2604  1960 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
hasi      137956  0.0  0.0   2604  1732 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
hasi      137963  0.0  0.0   2604  1716 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
hasi      137977  0.0  0.0   2604  1716 ?        Ds   15:15   0:00 /usr/lib/ssh/sftp-server
hasi      137992  0.0  0.0   2604  1716 ?        Ds   15:16   0:00 /usr/lib/ssh/sftp-server
hasi      138002  0.0  0.0   2604  1872 ?        Ds   15:16   0:00 /usr/lib/ssh/sftp-server
hasi      138011  0.0  0.0   2604  1716 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
hasi      138018  0.0  0.0   2604  1872 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
hasi      138025  0.0  0.0   2604  1804 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
hasi      138046  0.0  0.0   2604  1880 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
hasi      138057  0.0  0.0   2604  1988 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
hasi      138065  0.0  0.0   2604  1844 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
hasi      138088  0.0  0.0   2604  1864 ?        Ds   15:18   0:00 /usr/lib/ssh/sftp-server
hasi      138099  0.0  0.0   2604  1752 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
hasi      138106  0.0  0.0   2604  1916 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
hasi      138113  0.0  0.0   2604  1704 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
hasi      138121  0.0  0.0   2604  1872 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
hasi      138132  0.0  0.0   2604  1776 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
hasi      138139  0.0  0.0   2604  1688 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
hasi      138147  0.0  0.0   2604  1732 ?        Ds   15:20   0:00 /usr/lib/ssh/sftp-server
hasi      138162  0.0  0.0   2604  1880 ?        Ds   15:21   0:00 /usr/lib/ssh/sftp-server
hasi      138173  0.0  0.0   2604  1944 ?        Ds   15:21   0:00 /usr/lib/ssh/sftp-server
hasi      138182  0.0  0.0   2604  1776 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
hasi      138189  0.0  0.0   2604  1716 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
hasi      138196  0.0  0.0   2604  1688 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
hasi      138203  0.0  0.0   2604  1776 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
hasi      138212  0.0  0.0   2604  1872 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
hasi      138219  0.0  0.0   2604  1688 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
hasi      138229  0.0  0.0   2604  1716 ?        Ds   15:23   0:00 /usr/lib/ssh/sftp-server
hasi      138243  0.0  0.0   2604  1944 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
hasi      138250  0.0  0.0   2604  1728 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
hasi      138257  0.0  0.0   2604  1900 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
hasi      138267  0.0  0.0   2604  1904 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
hasi      138276  0.0  0.0   2604  1688 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
hasi      138283  0.0  0.0   2604  1700 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
hasi      138293  0.0  0.0   2604  1816 ?        Ds   15:25   0:00 /usr/lib/ssh/sftp-server
hasi      138305  0.0  0.0   2604  1596 ?        Ds   15:26   0:00 /usr/lib/ssh/sftp-server
hasi      138315  0.0  0.0   2604  1688 ?        Ds   15:26   0:00 /usr/lib/ssh/sftp-server
hasi      138326  0.0  0.0   2604  1728 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
hasi      138333  0.0  0.0   2604  1716 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
hasi      138340  0.0  0.0   2604  1716 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
hasi      138347  0.0  0.0   2604  1596 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
hasi      138356  0.0  0.0   2604  1776 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
hasi      138363  0.0  0.0   2604  1688 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
hasi      138373  0.0  0.0   2604  1716 ?        Ds   15:28   0:00 /usr/lib/ssh/sftp-server
hasi      138384  0.0  0.0   2604  1688 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
hasi      138391  0.0  0.0   2604  1596 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
hasi      138398  0.0  0.0   2604  1728 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
hasi      138406  0.0  0.0   2604  1716 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
hasi      138413  0.0  0.0   2604  1872 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
hasi      138423  0.0  0.0   2604  1916 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
hasi      138433  0.0  0.0   2604  1804 ?        Ds   15:30   0:00 /usr/lib/ssh/sftp-server
hasi      138455  0.0  0.0   2604  1804 ?        Ds   15:31   0:00 /usr/lib/ssh/sftp-server
hasi      138470  0.0  0.0   2604  2072 ?        Ds   15:31   0:00 /usr/lib/ssh/sftp-server
root      138974  0.0  0.0   3936  2080 pts/2    S+   16:32   0:00 grep         D
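
The listing above was captured by grepping `ps` output for state D (uninterruptible sleep). A slightly more robust one-liner matches on the state column itself rather than the whole line, so a command name containing a "D" cannot produce a false hit (a sketch; the field layout assumes procps `ps`):

```shell
# List tasks in uninterruptible sleep (state D), i.e. blocked on I/O.
# Column 3 is STAT, so the pattern only tests the process state.
ps -eo pid,user,stat,comm | awk 'NR == 1 || $3 ~ /^D/'
```

On a healthy system this prints only the header; during a lockup like the one reported it shows the stuck sftp-server and kworker tasks.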

[-- Attachment #6: uname.txt --]
[-- Type: text/plain, Size: 130 bytes --]

[root@coldnas ~]# uname -a
Linux coldnas 6.10.5-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 15 Aug 2024 00:25:30 +0000 x86_64 GNU/Linux

[-- Attachment #7: Type: text/plain, Size: 47011 bytes --]



> Am 08.08.2024 um 16:23 schrieb John Stoffel <john@stoffel.org>:
> 
>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
> 
>> Hi,
>>> On 7. Aug 2024, at 23:05, John Stoffel <john@stoffel.org> wrote:
>>> 
>>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>>> 
>>> 
>>> 
>>>> I had some more time at hand and managed to compile 5.15.164. The
>>>> issue is the same. After around 1h30m of work it hangs.  I’ll try to
>>>> reproduce this on a newer supported kernel if I can.
>>> 
>>> Supported by whom?   NixOS?  Why don't you just install Linux kernel
>>> 6.6.x and see if the problem is still there?  5.15.x is ancient and
>>> unsupported upstream now.  
> 
>> I did just that. However, 5.15 being “unsupported” by upstream is
>> confusing me. It’s an official LTS kernel with an EOL of December
>> 2026.
> 
> To quote the kernel.org:
> 
>     Longterm
> 
>     There are usually several "longterm maintenance" kernel releases
>     provided for the purposes of backporting bugfixes for older kernel
>     trees. Only important bugfixes are applied to such kernels and they
>     don't usually see very frequent releases, especially for older trees.
> 
> So when we run into people having problems with LTS kernels, the first
> thing we ask is for them to run the most recent kernels, because
> that's where the bug fixing happens.  
> 
> In any case, there have been some bugs in recent RAID5/RAID6 setups,
> so going to a recent kernel will help track these down. 
> 
> 
>> Also, I’d like to note that NixOS kernels tend to be very close to
>> upstream. The only patches that I can see are involved here are
>> those that patch out some hard coded references to user space paths:
> 
> Not in this case; kernel 5.15 is ancient, and you should be running
> 6.9.x or even newer for debugging issues like this. 
> 
>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/linux-kernels.nix#L173
>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/request-key-helper.patch
>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/bridge-stp-helper.patch
> 
>> Kernel is now:
> 
>> Linux barbrady08 6.10.3 #1-NixOS SMP PREEMPT_DYNAMIC Sat Aug  3 07:01:09 UTC 2024 x86_64 GNU/Linux
> 
> 
>> The issue is still there on 6.10.3 and now looks like shown below.
> 
> Great!  Thanks for making this change; it will let the developers
> help you more easily. 
> 
>> I’m aware that this is output that shows symptoms and not
>> (necessarily) the cause. I’m currently a bit out of ideas where to
>> look for more information and would appreciate any pointers. My
>> suspicion is an interaction problem triggered by the use of NVMe in
>> combination with other subsystems (XFS, dm-crypt and RAID are the
>> ones I’m aware of playing a role).
> 
>> The use of NVMe itself likely isn’t the issue (we’ve been using NVMe
>> on similar hosts, and also in combination with dm-crypt, with this
>> kernel for a while now) and I could imagine that it triggers a race
>> condition due to the higher performance - although the specific
>> performance parameters aren’t *that* high. Right before the lockup I
>> see ~700 IOPS reading and ~2.5k IOPS writing. So we have seen NVMe
>> with dm-crypt before, but not combined with RAID.
> 
>> I can perform debugging on that machine as needed, but googling for
>> any combination of hung tasks related to nvme/xfs/crypt/raid only
>> ends up showing me generic performance concerns from forums, an
>> unrelated XFS issue mentioned by Red Hat, and the list archive entry
>> for this post.
> 
> Can you try setting up some loop devices in the same type of
> configuration, and seeing if you can replicate the issue that way?
> Let's try to get the nvme stuff out of the way to see if this can be
> replicated more easily.  
> 
> 
>> [ 7497.019235] INFO: task .backy-wrapped:2706 blocked for more than 122 seconds.
>> [ 7497.027265]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.032173] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.040974] task:.backy-wrapped  state:D stack:0     pid:2706  tgid:2706  ppid:1      flags:0x00000002
>> [ 7497.040979] Call Trace:
>> [ 7497.040981]  <TASK>
>> [ 7497.040987]  __schedule+0x3fa/0x1550
>> [ 7497.040996]  ? xfs_iextents_copy+0xec/0x1b0 [xfs]
>> [ 7497.041085]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.041089]  ? xlog_copy_iovec+0x30/0x90 [xfs]
>> [ 7497.041168]  schedule+0x27/0xf0
>> [ 7497.041171]  io_schedule+0x46/0x70
>> [ 7497.041173]  folio_wait_bit_common+0x13f/0x340
>> [ 7497.041180]  ? __pfx_wake_page_function+0x10/0x10
>> [ 7497.041187]  folio_wait_writeback+0x2b/0x80
>> [ 7497.041191]  truncate_inode_partial_folio+0x5b/0x190
>> [ 7497.041194]  truncate_inode_pages_range+0x1de/0x400
>> [ 7497.041207]  evict+0x1b0/0x1d0
>> [ 7497.041212]  __dentry_kill+0x6e/0x170
>> [ 7497.041216]  dput+0xe5/0x1b0
>> [ 7497.041218]  do_renameat2+0x386/0x600
>> [ 7497.041226]  __x64_sys_rename+0x43/0x50
>> [ 7497.041229]  do_syscall_64+0xb7/0x200
>> [ 7497.041234]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>> [ 7497.041236] RIP: 0033:0x7f4be586f75b
>> [ 7497.041265] RSP: 002b:00007fffd2706538 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>> [ 7497.041267] RAX: ffffffffffffffda RBX: 00007fffd27065d0 RCX: 00007f4be586f75b
>> [ 7497.041269] RDX: 0000000000000000 RSI: 00007f4bd6f73e50 RDI: 00007f4bd6f732d0
>> [ 7497.041270] RBP: 00007fffd2706580 R08: 00000000ffffffff R09: 0000000000000000
>> [ 7497.041271] R10: 00007fffd27067b0 R11: 0000000000000246 R12: 00000000ffffff9c
>> [ 7497.041273] R13: 00000000ffffff9c R14: 0000000037fb4ab0 R15: 00007f4be5814810
>> [ 7497.041277]  </TASK>
>> [ 7497.041281] INFO: task kworker/u131:1:12780 blocked for more than 122 seconds.
>> [ 7497.049410]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.054317] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.063124] task:kworker/u131:1  state:D stack:0     pid:12780 tgid:12780 ppid:2      flags:0x00004000
>> [ 7497.063131] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>> [ 7497.063140] Call Trace:
>> [ 7497.063141]  <TASK>
>> [ 7497.063145]  __schedule+0x3fa/0x1550
>> [ 7497.063154]  schedule+0x27/0xf0
>> [ 7497.063156]  md_bitmap_startwrite+0x14f/0x1c0
>> [ 7497.063160]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [ 7497.063168]  __add_stripe_bio+0x1f4/0x240 [raid456]
>> [ 7497.063175]  raid5_make_request+0x34d/0x1280 [raid456]
>> [ 7497.063182]  ? __pfx_woken_wake_function+0x10/0x10
>> [ 7497.063184]  ? bio_split_rw+0x193/0x260
>> [ 7497.063190]  md_handle_request+0x153/0x270
>> [ 7497.063194]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.063198]  __submit_bio+0x190/0x240
>> [ 7497.063203]  submit_bio_noacct_nocheck+0x19a/0x3c0
>> [ 7497.063205]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.063207]  ? submit_bio_noacct+0x46/0x5a0
>> [ 7497.063210]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>> [ 7497.063214]  process_one_work+0x18f/0x3b0
>> [ 7497.063219]  worker_thread+0x233/0x340
>> [ 7497.063222]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.063225]  kthread+0xcd/0x100
>> [ 7497.063228]  ? __pfx_kthread+0x10/0x10
>> [ 7497.063230]  ret_from_fork+0x31/0x50
>> [ 7497.063234]  ? __pfx_kthread+0x10/0x10
>> [ 7497.063236]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.063243]  </TASK>
>> [ 7497.063246] INFO: task kworker/u131:0:17487 blocked for more than 122 seconds.
>> [ 7497.071367]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.076269] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.085073] task:kworker/u131:0  state:D stack:0     pid:17487 tgid:17487 ppid:2      flags:0x00004000
>> [ 7497.085081] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>> [ 7497.085086] Call Trace:
>> [ 7497.085087]  <TASK>
>> [ 7497.085089]  __schedule+0x3fa/0x1550
>> [ 7497.085094]  schedule+0x27/0xf0
>> [ 7497.085096]  md_bitmap_startwrite+0x14f/0x1c0
>> [ 7497.085098]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [ 7497.085102]  __add_stripe_bio+0x1f4/0x240 [raid456]
>> [ 7497.085108]  raid5_make_request+0x34d/0x1280 [raid456]
>> [ 7497.085114]  ? __pfx_woken_wake_function+0x10/0x10
>> [ 7497.085116]  ? bio_split_rw+0x193/0x260
>> [ 7497.085120]  md_handle_request+0x153/0x270
>> [ 7497.085122]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.085125]  __submit_bio+0x190/0x240
>> [ 7497.085128]  submit_bio_noacct_nocheck+0x19a/0x3c0
>> [ 7497.085131]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.085133]  ? submit_bio_noacct+0x46/0x5a0
>> [ 7497.085135]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>> [ 7497.085138]  process_one_work+0x18f/0x3b0
>> [ 7497.085142]  worker_thread+0x233/0x340
>> [ 7497.085145]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.085148]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.085150]  kthread+0xcd/0x100
>> [ 7497.085152]  ? __pfx_kthread+0x10/0x10
>> [ 7497.085155]  ret_from_fork+0x31/0x50
>> [ 7497.085157]  ? __pfx_kthread+0x10/0x10
>> [ 7497.085159]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.085164]  </TASK>
>> [ 7497.085165] INFO: task kworker/u131:2:18973 blocked for more than 122 seconds.
>> [ 7497.093282]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.098185] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.106988] task:kworker/u131:2  state:D stack:0     pid:18973 tgid:18973 ppid:2      flags:0x00004000
>> [ 7497.106993] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>> [ 7497.106998] Call Trace:
>> [ 7497.106999]  <TASK>
>> [ 7497.107001]  __schedule+0x3fa/0x1550
>> [ 7497.107006]  schedule+0x27/0xf0
>> [ 7497.107009]  md_bitmap_startwrite+0x14f/0x1c0
>> [ 7497.107012]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [ 7497.107016]  __add_stripe_bio+0x1f4/0x240 [raid456]
>> [ 7497.107021]  raid5_make_request+0x34d/0x1280 [raid456]
>> [ 7497.107026]  ? __pfx_woken_wake_function+0x10/0x10
>> [ 7497.107028]  ? bio_split_rw+0x193/0x260
>> [ 7497.107033]  md_handle_request+0x153/0x270
>> [ 7497.107036]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.107039]  __submit_bio+0x190/0x240
>> [ 7497.107042]  submit_bio_noacct_nocheck+0x19a/0x3c0
>> [ 7497.107044]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.107046]  ? submit_bio_noacct+0x46/0x5a0
>> [ 7497.107049]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>> [ 7497.107052]  process_one_work+0x18f/0x3b0
>> [ 7497.107055]  worker_thread+0x233/0x340
>> [ 7497.107058]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.107060]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.107063]  kthread+0xcd/0x100
>> [ 7497.107065]  ? __pfx_kthread+0x10/0x10
>> [ 7497.107067]  ret_from_fork+0x31/0x50
>> [ 7497.107069]  ? __pfx_kthread+0x10/0x10
>> [ 7497.107071]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.107081]  </TASK>
>> [ 7497.107086] INFO: task rsync:23530 blocked for more than 122 seconds.
>> [ 7497.114327]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.119226] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.128020] task:rsync           state:D stack:0     pid:23530 tgid:23530 ppid:23520  flags:0x00000000
>> [ 7497.128024] Call Trace:
>> [ 7497.128025]  <TASK>
>> [ 7497.128027]  __schedule+0x3fa/0x1550
>> [ 7497.128030]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.128034]  schedule+0x27/0xf0
>> [ 7497.128036]  schedule_timeout+0x15d/0x170
>> [ 7497.128040]  __down_common+0x119/0x220
>> [ 7497.128045]  down+0x47/0x60
>> [ 7497.128048]  xfs_buf_lock+0x31/0xe0 [xfs]
>> [ 7497.128131]  xfs_buf_find_lock+0x55/0x100 [xfs]
>> [ 7497.128185]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
>> [ 7497.128236]  xfs_buf_read_map+0x62/0x2a0 [xfs]
>> [ 7497.128287]  ? xfs_read_agf+0x97/0x150 [xfs]
>> [ 7497.128357]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
>> [ 7497.128429]  ? xfs_read_agf+0x97/0x150 [xfs]
>> [ 7497.128489]  xfs_read_agf+0x97/0x150 [xfs]
>> [ 7497.128540]  xfs_alloc_read_agf+0x5a/0x200 [xfs]
>> [ 7497.128589]  xfs_alloc_fix_freelist+0x345/0x660 [xfs]
>> [ 7497.128641]  xfs_alloc_vextent_prepare_ag+0x2d/0x120 [xfs]
>> [ 7497.128690]  xfs_alloc_vextent_exact_bno+0xd1/0x100 [xfs]
>> [ 7497.128740]  xfs_ialloc_ag_alloc+0x177/0x610 [xfs]
>> [ 7497.128812]  xfs_dialloc+0x219/0x7b0 [xfs]
>> [ 7497.128864]  ? xfs_trans_alloc_icreate+0x93/0x120 [xfs]
>> [ 7497.128935]  xfs_create+0x2c7/0x640 [xfs]
>> [ 7497.128998]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.129001]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.129003]  ? get_cached_acl+0x4c/0x90
>> [ 7497.129008]  xfs_generic_create+0x321/0x3a0 [xfs]
>> [ 7497.129061]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.129065]  path_openat+0xf82/0x1240
>> [ 7497.129072]  do_filp_open+0xc4/0x170
>> [ 7497.129084]  do_sys_openat2+0xab/0xe0
>> [ 7497.129090]  __x64_sys_openat+0x57/0xa0
>> [ 7497.129093]  do_syscall_64+0xb7/0x200
>> [ 7497.129096]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>> [ 7497.129099] RIP: 0033:0x7f6809d2be2f
>> [ 7497.129121] RSP: 002b:00007ffe3d410cf0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
>> [ 7497.129123] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6809d2be2f
>> [ 7497.129124] RDX: 00000000000000c2 RSI: 00007ffe3d412fc0 RDI: 00000000ffffff9c
>> [ 7497.129126] RBP: 000000000003a2f8 R08: 001f1108db8eff56 R09: 00007ffe3d410f2c
>> [ 7497.129128] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffe3d41300b
>> [ 7497.129129] R13: 00007ffe3d412fc0 R14: 8421084210842109 R15: 00007f6809dc6a80
>> [ 7497.129133]  </TASK>
>> [ 7497.129146] INFO: task kworker/u131:3:23611 blocked for more than 122 seconds.
>> [ 7497.137277]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.142187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.150980] task:kworker/u131:3  state:D stack:0     pid:23611 tgid:23611 ppid:2      flags:0x00004000
>> [ 7497.150986] Workqueue: writeback wb_workfn (flush-253:4)
>> [ 7497.150993] Call Trace:
>> [ 7497.150995]  <TASK>
>> [ 7497.150998]  __schedule+0x3fa/0x1550
>> [ 7497.151007]  schedule+0x27/0xf0
>> [ 7497.151009]  schedule_timeout+0x15d/0x170
>> [ 7497.151013]  __wait_for_common+0x90/0x1c0
>> [ 7497.151015]  ? __pfx_schedule_timeout+0x10/0x10
>> [ 7497.151020]  xfs_buf_iowait+0x1c/0xc0 [xfs]
>> [ 7497.151094]  __xfs_buf_submit+0x132/0x1e0 [xfs]
>> [ 7497.151146]  xfs_buf_read_map+0x129/0x2a0 [xfs]
>> [ 7497.151197]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>> [ 7497.151267]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
>> [ 7497.151336]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>> [ 7497.151396]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>> [ 7497.151446]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
>> [ 7497.151497]  xfs_btree_lookup+0xea/0x500 [xfs]
>> [ 7497.151546]  ? xfs_btree_increment+0x44/0x310 [xfs]
>> [ 7497.151596]  xfs_alloc_fixup_trees+0x66/0x4c0 [xfs]
>> [ 7497.151661]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>> [ 7497.151710]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
>> [ 7497.151764]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
>> [ 7497.151813]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.151817]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
>> [ 7497.151884]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
>> [ 7497.151938]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
>> [ 7497.151999]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
>> [ 7497.152048]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
>> [ 7497.152105]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
>> [ 7497.152155]  xfs_map_blocks+0x257/0x420 [xfs]
>> [ 7497.152228]  iomap_writepages+0x271/0x9b0
>> [ 7497.152235]  xfs_vm_writepages+0x67/0x90 [xfs]
>> [ 7497.152287]  do_writepages+0x76/0x260
>> [ 7497.152294]  ? uas_submit_urbs+0x8c/0x4c0 [uas]
>> [ 7497.152297]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.152300]  ? psi_group_change+0x213/0x3c0
>> [ 7497.152305]  __writeback_single_inode+0x3d/0x350
>> [ 7497.152307]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.152309]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.152312]  writeback_sb_inodes+0x21c/0x4e0
>> [ 7497.152323]  __writeback_inodes_wb+0x4c/0xf0
>> [ 7497.152325]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.152328]  wb_writeback+0x193/0x310
>> [ 7497.152332]  wb_workfn+0x357/0x450
>> [ 7497.152337]  process_one_work+0x18f/0x3b0
>> [ 7497.152342]  worker_thread+0x233/0x340
>> [ 7497.152345]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.152348]  kthread+0xcd/0x100
>> [ 7497.152352]  ? __pfx_kthread+0x10/0x10
>> [ 7497.152354]  ret_from_fork+0x31/0x50
>> [ 7497.152358]  ? __pfx_kthread+0x10/0x10
>> [ 7497.152360]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.152366]  </TASK>
>> [ 7497.152368] INFO: task kworker/u131:4:23612 blocked for more than 123 seconds.
>> [ 7497.160489]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.165390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.174190] task:kworker/u131:4  state:D stack:0     pid:23612 tgid:23612 ppid:2      flags:0x00004000
>> [ 7497.174194] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>> [ 7497.174200] Call Trace:
>> [ 7497.174201]  <TASK>
>> [ 7497.174203]  __schedule+0x3fa/0x1550
>> [ 7497.174208]  schedule+0x27/0xf0
>> [ 7497.174210]  md_bitmap_startwrite+0x14f/0x1c0
>> [ 7497.174214]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [ 7497.174219]  __add_stripe_bio+0x1f4/0x240 [raid456]
>> [ 7497.174227]  raid5_make_request+0x34d/0x1280 [raid456]
>> [ 7497.174233]  ? __pfx_woken_wake_function+0x10/0x10
>> [ 7497.174235]  ? bio_split_rw+0x193/0x260
>> [ 7497.174242]  md_handle_request+0x153/0x270
>> [ 7497.174245]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.174248]  __submit_bio+0x190/0x240
>> [ 7497.174252]  submit_bio_noacct_nocheck+0x19a/0x3c0
>> [ 7497.174255]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.174257]  ? submit_bio_noacct+0x46/0x5a0
>> [ 7497.174259]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>> [ 7497.174263]  process_one_work+0x18f/0x3b0
>> [ 7497.174266]  worker_thread+0x233/0x340
>> [ 7497.174269]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.174271]  kthread+0xcd/0x100
>> [ 7497.174273]  ? __pfx_kthread+0x10/0x10
>> [ 7497.174276]  ret_from_fork+0x31/0x50
>> [ 7497.174277]  ? __pfx_kthread+0x10/0x10
>> [ 7497.174279]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.174285]  </TASK>
>> [ 7497.174292] INFO: task kworker/u130:33:23645 blocked for more than 123 seconds.
>> [ 7497.182499]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.187400] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.196203] task:kworker/u130:33 state:D stack:0     pid:23645 tgid:23645 ppid:2      flags:0x00004000
>> [ 7497.196209] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
>> [ 7497.196281] Call Trace:
>> [ 7497.196282]  <TASK>
>> [ 7497.196285]  __schedule+0x3fa/0x1550
>> [ 7497.196289]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.196293]  schedule+0x27/0xf0
>> [ 7497.196295]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
>> [ 7497.196346]  ? __pfx_default_wake_function+0x10/0x10
>> [ 7497.196351]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
>> [ 7497.196400]  xlog_write+0x310/0x470 [xfs]
>> [ 7497.196451]  xlog_cil_push_work+0x6a5/0x880 [xfs]
>> [ 7497.196503]  process_one_work+0x18f/0x3b0
>> [ 7497.196507]  worker_thread+0x233/0x340
>> [ 7497.196510]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.196512]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.196515]  kthread+0xcd/0x100
>> [ 7497.196517]  ? __pfx_kthread+0x10/0x10
>> [ 7497.196519]  ret_from_fork+0x31/0x50
>> [ 7497.196522]  ? __pfx_kthread+0x10/0x10
>> [ 7497.196524]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.196529]  </TASK>
>> [ 7497.196531] INFO: task kworker/u131:6:23863 blocked for more than 123 seconds.
>> [ 7497.204648]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.209539] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.218347] task:kworker/u131:6  state:D stack:0     pid:23863 tgid:23863 ppid:2      flags:0x00004000
>> [ 7497.218353] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>> [ 7497.218359] Call Trace:
>> [ 7497.218360]  <TASK>
>> [ 7497.218363]  __schedule+0x3fa/0x1550
>> [ 7497.218369]  schedule+0x27/0xf0
>> [ 7497.218371]  md_bitmap_startwrite+0x14f/0x1c0
>> [ 7497.218375]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [ 7497.218379]  __add_stripe_bio+0x1f4/0x240 [raid456]
>> [ 7497.218384]  raid5_make_request+0x34d/0x1280 [raid456]
>> [ 7497.218390]  ? __pfx_woken_wake_function+0x10/0x10
>> [ 7497.218392]  ? bio_split_rw+0x193/0x260
>> [ 7497.218398]  md_handle_request+0x153/0x270
>> [ 7497.218401]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.218405]  __submit_bio+0x190/0x240
>> [ 7497.218408]  submit_bio_noacct_nocheck+0x19a/0x3c0
>> [ 7497.218410]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.218413]  ? submit_bio_noacct+0x46/0x5a0
>> [ 7497.218415]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>> [ 7497.218419]  process_one_work+0x18f/0x3b0
>> [ 7497.218423]  worker_thread+0x233/0x340
>> [ 7497.218426]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.218428]  kthread+0xcd/0x100
>> [ 7497.218430]  ? __pfx_kthread+0x10/0x10
>> [ 7497.218433]  ret_from_fork+0x31/0x50
>> [ 7497.218435]  ? __pfx_kthread+0x10/0x10
>> [ 7497.218437]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.218442]  </TASK>
>> [ 7497.218444] INFO: task kworker/u131:7:23864 blocked for more than 123 seconds.
>> [ 7497.226572]       Not tainted 6.10.3 #1-NixOS
>> [ 7497.231475] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [ 7497.240277] task:kworker/u131:7  state:D stack:0     pid:23864 tgid:23864 ppid:2      flags:0x00004000
>> [ 7497.240282] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>> [ 7497.240287] Call Trace:
>> [ 7497.240288]  <TASK>
>> [ 7497.240290]  __schedule+0x3fa/0x1550
>> [ 7497.240298]  schedule+0x27/0xf0
>> [ 7497.240301]  md_bitmap_startwrite+0x14f/0x1c0
>> [ 7497.240304]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [ 7497.240310]  __add_stripe_bio+0x1f4/0x240 [raid456]
>> [ 7497.240314]  raid5_make_request+0x34d/0x1280 [raid456]
>> [ 7497.240320]  ? __pfx_woken_wake_function+0x10/0x10
>> [ 7497.240322]  ? bio_split_rw+0x193/0x260
>> [ 7497.240328]  md_handle_request+0x153/0x270
>> [ 7497.240330]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.240334]  __submit_bio+0x190/0x240
>> [ 7497.240338]  submit_bio_noacct_nocheck+0x19a/0x3c0
>> [ 7497.240340]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [ 7497.240342]  ? submit_bio_noacct+0x46/0x5a0
>> [ 7497.240345]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>> [ 7497.240348]  process_one_work+0x18f/0x3b0
>> [ 7497.240353]  worker_thread+0x233/0x340
>> [ 7497.240356]  ? __pfx_worker_thread+0x10/0x10
>> [ 7497.240358]  kthread+0xcd/0x100
>> [ 7497.240361]  ? __pfx_kthread+0x10/0x10
>> [ 7497.240364]  ret_from_fork+0x31/0x50
>> [ 7497.240366]  ? __pfx_kthread+0x10/0x10
>> [ 7497.240368]  ret_from_fork_asm+0x1a/0x30
>> [ 7497.240375]  </TASK>
>> [ 7497.240376] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
> 
>>> 
>>> 
>>> 
>>>> Kernel:
>>> 
>>>> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024
>>> 
>>>> The config is unchanged except from the deprecated NFSD_V2_ACL and NFSD_V3 options which I had to remove. NFS is not in use on this server, though.
>>> 
>>>> Output:
>>> 
>>>> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
>>>> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
>>>> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4549.860435] Call Trace:
>>>> [ 4549.860437]  <TASK>
>>>> [ 4549.860440]  __schedule+0x373/0x1580
>>>> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
>>>> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
>>>> [ 4549.860453]  schedule+0x5b/0xe0
>>>> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4549.860459]  ? finish_wait+0x90/0x90
>>>> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>>> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
>>>> [ 4549.860490]  ? finish_wait+0x90/0x90
>>>> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
>>>> [ 4549.860495]  md_handle_request+0x11f/0x1b0
>>>> [ 4549.860500]  md_submit_bio+0x6e/0xb0
>>>> [ 4549.860502]  __submit_bio+0x18c/0x220
>>>> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4549.860517]  process_one_work+0x1d3/0x360
>>>> [ 4549.860521]  worker_thread+0x4d/0x3b0
>>>> [ 4549.860523]  ? process_one_work+0x360/0x360
>>>> [ 4549.860525]  kthread+0x115/0x140
>>>> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
>>>> [ 4549.860530]  ret_from_fork+0x1f/0x30
>>>> [ 4549.860535]  </TASK>
>>>> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
>>>> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
>>>> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4549.882368] Call Trace:
>>>> [ 4549.882369]  <TASK>
>>>> [ 4549.882370]  __schedule+0x373/0x1580
>>>> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>> [ 4549.882379]  schedule+0x5b/0xe0
>>>> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4549.882384]  ? finish_wait+0x90/0x90
>>>> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
>>>> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.882403]  ? finish_wait+0x90/0x90
>>>> [ 4549.882406]  md_handle_request+0x11f/0x1b0
>>>> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
>>>> [ 4549.882413]  md_submit_bio+0x6e/0xb0
>>>> [ 4549.882415]  __submit_bio+0x18c/0x220
>>>> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4549.882428]  process_one_work+0x1d3/0x360
>>>> [ 4549.882431]  worker_thread+0x4d/0x3b0
>>>> [ 4549.882433]  ? process_one_work+0x360/0x360
>>>> [ 4549.882435]  kthread+0x115/0x140
>>>> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
>>>> [ 4549.882438]  ret_from_fork+0x1f/0x30
>>>> [ 4549.882442]  </TASK>
>>>> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
>>>> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
>>>> [ 4549.904411] Call Trace:
>>>> [ 4549.904412]  <TASK>
>>>> [ 4549.904414]  __schedule+0x373/0x1580
>>>> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
>>>> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
>>>> [ 4549.904498]  schedule+0x5b/0xe0
>>>> [ 4549.904500]  io_schedule+0x42/0x70
>>>> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
>>>> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
>>>> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
>>>> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
>>>> [ 4549.904520]  evict+0x15f/0x180
>>>> [ 4549.904524]  __dentry_kill+0xde/0x170
>>>> [ 4549.904527]  dput+0x139/0x320
>>>> [ 4549.904529]  do_renameat2+0x375/0x5f0
>>>> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
>>>> [ 4549.904538]  do_syscall_64+0x34/0x80
>>>> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>>> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
>>>> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>>> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
>>>> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
>>>> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
>>>> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
>>>> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
>>>> [ 4549.904555]  </TASK>
>>>> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
>>>> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
>>>> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4549.926380] Call Trace:
>>>> [ 4549.926381]  <TASK>
>>>> [ 4549.926383]  __schedule+0x373/0x1580
>>>> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>> [ 4549.926392]  schedule+0x5b/0xe0
>>>> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4549.926397]  ? finish_wait+0x90/0x90
>>>> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
>>>> [ 4549.926413]  ? finish_wait+0x90/0x90
>>>> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
>>>> [ 4549.926418]  md_handle_request+0x11f/0x1b0
>>>> [ 4549.926422]  md_submit_bio+0x6e/0xb0
>>>> [ 4549.926424]  __submit_bio+0x18c/0x220
>>>> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4549.926437]  process_one_work+0x1d3/0x360
>>>> [ 4549.926441]  worker_thread+0x4d/0x3b0
>>>> [ 4549.926442]  ? process_one_work+0x360/0x360
>>>> [ 4549.926444]  kthread+0x115/0x140
>>>> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
>>>> [ 4549.926448]  ret_from_fork+0x1f/0x30
>>>> [ 4549.926454]  </TASK>
>>>> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
>>>> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
>>>> [ 4549.947503] Call Trace:
>>>> [ 4549.947505]  <TASK>
>>>> [ 4549.947505]  ? usleep_range_state+0x90/0x90
>>>> [ 4549.947510]  __schedule+0x373/0x1580
>>>> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
>>>> [ 4549.947519]  ? usleep_range_state+0x90/0x90
>>>> [ 4549.947521]  schedule+0x5b/0xe0
>>>> [ 4549.947523]  schedule_timeout+0xff/0x130
>>>> [ 4549.947526]  __wait_for_common+0xaf/0x160
>>>> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
>>>> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
>>>> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
>>>> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
>>>> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
>>>> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
>>>> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
>>>> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
>>>> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>>>> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
>>>> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
>>>> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
>>>> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
>>>> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
>>>> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
>>>> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
>>>> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
>>>> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
>>>> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
>>>> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
>>>> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
>>>> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
>>>> [ 4549.948340]  vfs_rename+0x818/0x10d0
>>>> [ 4549.948345]  ? lookup_dcache+0x17/0x60
>>>> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
>>>> [ 4549.948350]  do_renameat2+0x57f/0x5f0
>>>> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
>>>> [ 4549.948357]  do_syscall_64+0x34/0x80
>>>> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>>> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
>>>> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>>> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
>>>> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
>>>> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
>>>> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
>>>> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
>>>> [ 4549.948373]  </TASK>
>>>> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
>>>> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
>>>> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4549.970205] Call Trace:
>>>> [ 4549.970206]  <TASK>
>>>> [ 4549.970209]  __schedule+0x373/0x1580
>>>> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.970215]  schedule+0x5b/0xe0
>>>> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4549.970219]  ? finish_wait+0x90/0x90
>>>> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>>> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
>>>> [ 4549.970244]  ? finish_wait+0x90/0x90
>>>> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
>>>> [ 4549.970248]  md_handle_request+0x11f/0x1b0
>>>> [ 4549.970251]  md_submit_bio+0x6e/0xb0
>>>> [ 4549.970254]  __submit_bio+0x18c/0x220
>>>> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4549.970267]  process_one_work+0x1d3/0x360
>>>> [ 4549.970270]  worker_thread+0x4d/0x3b0
>>>> [ 4549.970272]  ? process_one_work+0x360/0x360
>>>> [ 4549.970274]  kthread+0x115/0x140
>>>> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
>>>> [ 4549.970278]  ret_from_fork+0x1f/0x30
>>>> [ 4549.970282]  </TASK>
>>>> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
>>>> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
>>>> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4549.992097] Call Trace:
>>>> [ 4549.992098]  <TASK>
>>>> [ 4549.992100]  __schedule+0x373/0x1580
>>>> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>> [ 4549.992109]  schedule+0x5b/0xe0
>>>> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4549.992114]  ? finish_wait+0x90/0x90
>>>> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
>>>> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
>>>> [ 4549.992135]  ? finish_wait+0x90/0x90
>>>> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
>>>> [ 4549.992139]  md_handle_request+0x11f/0x1b0
>>>> [ 4549.992142]  md_submit_bio+0x6e/0xb0
>>>> [ 4549.992144]  __submit_bio+0x18c/0x220
>>>> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4549.992157]  process_one_work+0x1d3/0x360
>>>> [ 4549.992160]  worker_thread+0x4d/0x3b0
>>>> [ 4549.992162]  ? process_one_work+0x360/0x360
>>>> [ 4549.992163]  kthread+0x115/0x140
>>>> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
>>>> [ 4549.992168]  ret_from_fork+0x1f/0x30
>>>> [ 4549.992172]  </TASK>
>>>> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
>>>> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
>>>> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4550.013992] Call Trace:
>>>> [ 4550.013993]  <TASK>
>>>> [ 4550.013995]  __schedule+0x373/0x1580
>>>> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>> [ 4550.014003]  schedule+0x5b/0xe0
>>>> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4550.014008]  ? finish_wait+0x90/0x90
>>>> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
>>>> [ 4550.014022]  ? finish_wait+0x90/0x90
>>>> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
>>>> [ 4550.014027]  md_handle_request+0x11f/0x1b0
>>>> [ 4550.014030]  md_submit_bio+0x6e/0xb0
>>>> [ 4550.014032]  __submit_bio+0x18c/0x220
>>>> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4550.014044]  process_one_work+0x1d3/0x360
>>>> [ 4550.014047]  worker_thread+0x4d/0x3b0
>>>> [ 4550.014049]  ? process_one_work+0x360/0x360
>>>> [ 4550.014050]  kthread+0x115/0x140
>>>> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
>>>> [ 4550.014054]  ret_from_fork+0x1f/0x30
>>>> [ 4550.014058]  </TASK>
>>>> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
>>>> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
>>>> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4550.035887] Call Trace:
>>>> [ 4550.035888]  <TASK>
>>>> [ 4550.035890]  __schedule+0x373/0x1580
>>>> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>> [ 4550.035899]  schedule+0x5b/0xe0
>>>> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4550.035904]  ? finish_wait+0x90/0x90
>>>> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
>>>> [ 4550.035919]  ? finish_wait+0x90/0x90
>>>> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
>>>> [ 4550.035924]  md_handle_request+0x11f/0x1b0
>>>> [ 4550.035927]  md_submit_bio+0x6e/0xb0
>>>> [ 4550.035929]  __submit_bio+0x18c/0x220
>>>> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4550.035942]  process_one_work+0x1d3/0x360
>>>> [ 4550.035946]  worker_thread+0x4d/0x3b0
>>>> [ 4550.035948]  ? process_one_work+0x360/0x360
>>>> [ 4550.035949]  kthread+0x115/0x140
>>>> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
>>>> [ 4550.035953]  ret_from_fork+0x1f/0x30
>>>> [ 4550.035957]  </TASK>
>>>> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
>>>> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
>>>> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
>>>> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>> [ 4550.057794] Call Trace:
>>>> [ 4550.057796]  <TASK>
>>>> [ 4550.057798]  __schedule+0x373/0x1580
>>>> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>> [ 4550.057806]  schedule+0x5b/0xe0
>>>> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
>>>> [ 4550.057810]  ? finish_wait+0x90/0x90
>>>> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
>>>> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
>>>> [ 4550.057824]  ? finish_wait+0x90/0x90
>>>> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
>>>> [ 4550.057828]  md_handle_request+0x11f/0x1b0
>>>> [ 4550.057831]  md_submit_bio+0x6e/0xb0
>>>> [ 4550.057834]  __submit_bio+0x18c/0x220
>>>> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
>>>> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
>>>> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>> [ 4550.057846]  process_one_work+0x1d3/0x360
>>>> [ 4550.057848]  worker_thread+0x4d/0x3b0
>>>> [ 4550.057850]  ? process_one_work+0x360/0x360
>>>> [ 4550.057852]  kthread+0x115/0x140
>>>> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
>>>> [ 4550.057856]  ret_from_fork+0x1f/0x30
>>>> [ 4550.057860]  </TASK>
>>> 
>>> 
>>>>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>>>>> 
>>>>> I tried updating to 5.15.164, but have to struggle against our config management as some options have been shifted that I need to filter out: NFSD_V3 and NFSD2_ACL are now fixed and cause config errors if set - I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday
>>>>> 
>>>>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>> 
>>>>>> Sure,
>>>>>> 
>>>>>> would you prefer me testing on 5.15.x or something else?
>>>>>> 
>>>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>> 在 2024/08/06 22:10, Christian Theune 写道:
>>>>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array that is encrypted using LUKS.
>>>>>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 gib in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>>>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>>>>>> Kernel:
>>>>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>>>> 
>>>>>> Since you can trigger this easily, I'll suggest you to try the latest
>>>>>> kernel release first.
>>>>>> 
>>>>>> Thanks,
>>>>>> Kuai
>>>>>> 
>>>>>>>> See the kernel config attached.
>>>>>> 
>>>>>> 
>>>>>> Liebe Grüße,
>>>>>> Christian Theune
>>>>>> 
>>>>>> -- 
>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>>> 
>>>>>> 
>>>>> 
>>>>> Liebe Grüße,
>>>>> Christian Theune
>>>>> 
>>>>> -- 
>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>> 
>>> 
>>>> Liebe Grüße,
>>>> Christian Theune
>>> 
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>> 
>>> 
> 
>> Liebe Grüße,
>> Christian Theune
> 
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-19 19:12               ` tihmstar
@ 2024-08-19 21:05                 ` John Stoffel
  2024-08-24 16:56                   ` tihmstar
  2024-08-24 18:12                   ` Dragan Milivojević
  0 siblings, 2 replies; 88+ messages in thread
From: John Stoffel @ 2024-08-19 21:05 UTC (permalink / raw)
  To: tihmstar
  Cc: John Stoffel, Christian Theune, Yu Kuai,
	linux-raid@vger.kernel.org, dm-devel, yukuai (C)

>>>>> "tihmstar" == tihmstar  <tihmstar@gmail.com> writes:

> Hi,

> I think I have the same problem (it looks very similar at least). I
> updated my kernel yesterday, but before that (a few minor versions
> earlier) I'm pretty sure I saw a very similar stack trace with
> "md_bitmap_startwrite".

This almost smells like an MD RAID5/6 bitmap problem. Have you tried
the patch that was posted in the thread to turn off bitmaps?

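Roughly, taking the write-intent bitmap out of play for a test would look like the following (a sketch only; it assumes the array is /dev/md127, so adjust the device name, and note it changes recovery behavior, so treat it as a diagnostic step rather than a fix):

```shell
# Check whether the array currently has an internal write-intent bitmap
mdadm --detail /dev/md127 | grep -i bitmap

# Remove the bitmap; the suspected hang sits in md_bitmap_startwrite,
# so this removes that code path from writes entirely
mdadm --grow --bitmap=none /dev/md127

# ... reproduce the rsync workload here ...

# Re-add the internal bitmap afterwards; without one, any unclean
# shutdown forces a full resync of the array
mdadm --grow --bitmap=internal /dev/md127
```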
> The setup seems to be similar, with some slight differences.

> I'm also running a Linux RAID-6 with LUKS; however, I'm running it on SATA HDDs, not NVMe.

Shouldn't be a difference, except in terms of speed I would think.

> Also I have the LUKS layer on each HDD individually; the RAID-6 is
> built on top of the /dev/mapper devices of those HDDs.

Interesting.  Why this way?  It would seem you now have to enter N
passwords on bootup, instead of just one.  

> I.e. I have LUKS below the RAID, while Christian has it on top of the RAID.

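For anyone trying to reproduce either setup, the two stackings differ only in assembly order. A sketch with hypothetical device names (sda-sdd, md127) follows; only the second layout matches the hung-task traces above, where kcryptd submits bios into raid5_make_request:

```shell
# Layout A (tihmstar): LUKS on each member disk, RAID-6 assembled on top.
# One passphrase per disk at boot; MD only ever sees decrypted devices.
cryptsetup open /dev/sda crypt-a
cryptsetup open /dev/sdb crypt-b
cryptsetup open /dev/sdc crypt-c
cryptsetup open /dev/sdd crypt-d
mdadm --create /dev/md127 --level=6 --raid-devices=4 \
    /dev/mapper/crypt-a /dev/mapper/crypt-b \
    /dev/mapper/crypt-c /dev/mapper/crypt-d

# Layout B (Christian): RAID-6 on the raw disks, LUKS on the array.
# Single passphrase; dm-crypt's kcryptd workers submit encrypted writes
# down into MD, which is the call path visible in the stack traces.
mdadm --create /dev/md127 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
cryptsetup open /dev/md127 crypt-md
mkfs.xfs /dev/mapper/crypt-md
```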
> Directly on the RAID I have btrfs (as a "single disk"; RAID is handled
> by linux-raid).

Good; btrfs RAID6 is known to have problems in a big way.

> The "trigger" is similar too: I have one large NAS with 100 TB of data, which I'm trying to migrate to my new NAS.
> "rclone copy --links --local-zero-size-links -P --transfers=16 --sftp-ask-password ....."

> After running that for ~30 hours (10 Gb network link), IO on the new NAS is completely stuck.

> I attached a few files with info about my setup, which I think might be useful.
> I'm happy to help with debugging, but I too have data on the new NAS, so rebuilding the RAID isn't an option for me either.

> Cheers
> tihmstar

> PS: Second try sending this (I've never used mailing lists before).

> [root@coldnas ~]# dmesg 
> [56065.390298] md/raid:md127: read error corrected (8 sectors at 8952913016 on dm-1)
> [56068.949917] ata25.00: exception Emask 0x0 SAct 0x79ffffe1 SErr 0x0 action 0x0
> [56068.957061] ata25.00: irq_stat 0x40000008
> [56068.961099] ata25.00: failed command: READ FPDMA QUEUED
> [56068.966335] ata25.00: cmd 60/40:d8:00:9d:a5/05:00:15:02:00/40 tag 27 ncq dma 688128 in
>                         res 43/40:40:c0:9d:a5/00:05:15:02:00/00 Emask 0x408 (media error) <F>
> [56068.982442] ata25.00: status: { DRDY SENSE ERR }
> [56068.987078] ata25.00: error: { UNC }
> [56069.083162] ata25.00: configured for UDMA/133
> [56069.083228] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56069.083232] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
> [56069.083234] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
> [56069.083236] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d 00 00 00 05 40 00 00
> [56069.083238] critical medium error, dev sdc, sector 8953109760 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [56069.093669] ata25: EH complete
> [56071.267547] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56071.274689] ata25.00: irq_stat 0x40000008
> [56071.278714] ata25.00: failed command: READ FPDMA QUEUED
> [56071.283952] ata25.00: cmd 60/c0:80:48:aa:a5/02:00:15:02:00/40 tag 16 ncq dma 360448 in
>                         res 43/40:c0:b0:ab:a5/00:02:15:02:00/00 Emask 0x408 (media error) <F>
> [56071.300082] ata25.00: status: { DRDY SENSE ERR }
> [56071.304712] ata25.00: error: { UNC }
> [56071.390681] ata25.00: configured for UDMA/133
> [56071.390724] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56071.390726] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
> [56071.390728] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
> [56071.390729] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 a5 aa 48 00 00 02 c0 00 00
> [56071.390731] critical medium error, dev sdc, sector 8953113160 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
> [56071.401093] ata25: EH complete
> [56073.426604] ata25.00: exception Emask 0x0 SAct 0x1000 SErr 0x0 action 0x0
> [56073.433402] ata25.00: irq_stat 0x40000008
> [56073.437466] ata25.00: failed command: READ FPDMA QUEUED
> [56073.442727] ata25.00: cmd 60/08:60:a8:b9:a5/00:00:15:02:00/40 tag 12 ncq dma 4096 in
>                         res 43/40:08:a8:b9:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56073.458694] ata25.00: status: { DRDY SENSE ERR }
> [56073.463326] ata25.00: error: { UNC }
> [56073.564919] ata25.00: configured for UDMA/133
> [56073.564929] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56073.564933] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
> [56073.564935] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
> [56073.564937] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 15 a5 b9 a8 00 00 00 08 00 00
> [56073.564939] critical medium error, dev sdc, sector 8953117096 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56073.575101] ata25: EH complete
> [56074.052895] md/raid:md127: read error corrected (8 sectors at 8953082280 on dm-1)
> [56077.233232] ata25.00: exception Emask 0x0 SAct 0x83f7ff02 SErr 0x0 action 0x0
> [56077.240378] ata25.00: irq_stat 0x40000008
> [56077.244411] ata25.00: failed command: READ FPDMA QUEUED
> [56077.249645] ata25.00: cmd 60/40:88:00:d5:a5/05:00:15:02:00/40 tag 17 ncq dma 688128 in
>                         res 43/40:40:40:d5:a5/00:05:15:02:00/00 Emask 0x408 (media error) <F>
> [56077.265768] ata25.00: status: { DRDY SENSE ERR }
> [56077.270401] ata25.00: error: { UNC }
> [56077.371947] ata25.00: configured for UDMA/133
> [56077.371989] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56077.371992] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
> [56077.371994] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
> [56077.371995] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 00 00 00 05 40 00 00
> [56077.371997] critical medium error, dev sdc, sector 8953124096 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [56077.382450] ata25: EH complete
> [56079.490942] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56079.498082] ata25.00: irq_stat 0x40000008
> [56079.502109] ata25.00: failed command: READ FPDMA QUEUED
> [56079.507345] ata25.00: cmd 60/c0:80:40:e2:a5/02:00:15:02:00/40 tag 16 ncq dma 360448 in
>                         res 43/40:c0:40:e3:a5/00:02:15:02:00/00 Emask 0x408 (media error) <F>
> [56079.523453] ata25.00: status: { DRDY SENSE ERR }
> [56079.528103] ata25.00: error: { UNC }
> [56079.637795] ata25.00: configured for UDMA/133
> [56079.637949] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56079.637952] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
> [56079.637955] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
> [56079.637957] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 a5 e2 40 00 00 02 c0 00 00
> [56079.637959] critical medium error, dev sdc, sector 8953127488 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
> [56079.648339] ata25: EH complete
> [56084.636191] ata25.00: exception Emask 0x0 SAct 0x3fde1800 SErr 0x0 action 0x0
> [56084.643341] ata25.00: irq_stat 0x40000008
> [56084.647380] ata25.00: failed command: READ FPDMA QUEUED
> [56084.652610] ata25.00: cmd 60/08:88:c8:ab:a5/00:00:15:02:00/40 tag 17 ncq dma 4096 in
>                         res 43/40:08:c8:ab:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56084.668561] ata25.00: status: { DRDY SENSE ERR }
> [56084.673189] ata25.00: error: { UNC }
> [56084.760994] ata25.00: configured for UDMA/133
> [56084.761009] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56084.761012] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
> [56084.761013] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
> [56084.761015] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 15 a5 ab c8 00 00 00 08 00 00
> [56084.761016] critical medium error, dev sdc, sector 8953113544 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56084.771196] ata25: EH complete
> [56087.035282] ata25.00: exception Emask 0x0 SAct 0x1ffc SErr 0x0 action 0x0
> [56087.042074] ata25.00: irq_stat 0x40000008
> [56087.046100] ata25.00: failed command: READ FPDMA QUEUED
> [56087.051341] ata25.00: cmd 60/08:10:d0:ab:a5/00:00:15:02:00/40 tag 2 ncq dma 4096 in
>                         res 43/40:08:d0:ab:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56087.067187] ata25.00: status: { DRDY SENSE ERR }
> [56087.071879] ata25.00: error: { UNC }
> [56087.168480] ata25.00: configured for UDMA/133
> [56087.168495] sd 24:0:0:0: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=6s
> [56087.168498] sd 24:0:0:0: [sdc] tag#2 Sense Key : Medium Error [current] 
> [56087.168501] sd 24:0:0:0: [sdc] tag#2 Add. Sense: Unrecovered read error
> [56087.168503] sd 24:0:0:0: [sdc] tag#2 CDB: Read(16) 88 00 00 00 00 02 15 a5 ab d0 00 00 00 08 00 00
> [56087.168504] critical medium error, dev sdc, sector 8953113552 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56087.178680] ata25: EH complete
> [56087.435439] md/raid:md127: read error corrected (8 sectors at 8953078736 on dm-1)
> [56087.435437] md/raid:md127: read error corrected (8 sectors at 8953078728 on dm-1)
> [56090.450888] ata25.00: exception Emask 0x0 SAct 0x701f9ff SErr 0x0 action 0x0
> [56090.457937] ata25.00: irq_stat 0x40000008
> [56090.461967] ata25.00: failed command: READ FPDMA QUEUED
> [56090.467243] ata25.00: cmd 60/08:00:40:e3:a5/00:00:15:02:00/40 tag 0 ncq dma 4096 in
>                         res 43/40:08:40:e3:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56090.483104] ata25.00: status: { DRDY SENSE ERR }
> [56090.487736] ata25.00: error: { UNC }
> [56090.708972] ata25.00: configured for UDMA/133
> [56090.708989] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56090.708992] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
> [56090.708994] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
> [56090.708997] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 a5 e3 40 00 00 00 08 00 00
> [56090.708998] critical medium error, dev sdc, sector 8953127744 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56090.719219] ata25: EH complete
> [56094.801196] ata25.00: exception Emask 0x0 SAct 0x3ff0 SErr 0x0 action 0x0
> [56094.808023] ata25.00: irq_stat 0x40000008
> [56094.812070] ata25.00: failed command: READ FPDMA QUEUED
> [56094.817310] ata25.00: cmd 60/08:20:40:d5:a5/00:00:15:02:00/40 tag 4 ncq dma 4096 in
>                         res 43/40:08:40:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56094.833164] ata25.00: status: { DRDY SENSE ERR }
> [56094.837814] ata25.00: error: { UNC }
> [56094.940782] ata25.00: configured for UDMA/133
> [56094.940794] sd 24:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56094.940797] sd 24:0:0:0: [sdc] tag#4 Sense Key : Medium Error [current] 
> [56094.940798] sd 24:0:0:0: [sdc] tag#4 Add. Sense: Unrecovered read error
> [56094.940800] sd 24:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 40 00 00 00 08 00 00
> [56094.940801] critical medium error, dev sdc, sector 8953124160 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56094.950988] ata25: EH complete
> [56097.106784] ata25.00: exception Emask 0x0 SAct 0x1ff SErr 0x0 action 0x0
> [56097.113493] ata25.00: irq_stat 0x40000008
> [56097.117535] ata25.00: failed command: READ FPDMA QUEUED
> [56097.122766] ata25.00: cmd 60/08:00:48:d5:a5/00:00:15:02:00/40 tag 0 ncq dma 4096 in
>                         res 43/40:08:48:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56097.138641] ata25.00: status: { DRDY SENSE ERR }
> [56097.143266] ata25.00: error: { UNC }
> [56097.240013] ata25.00: configured for UDMA/133
> [56097.240023] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56097.240025] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
> [56097.240027] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
> [56097.240029] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 48 00 00 00 08 00 00
> [56097.240030] critical medium error, dev sdc, sector 8953124168 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56097.250218] ata25: EH complete
> [56097.410333] md/raid:md127: read error corrected (8 sectors at 8953089344 on dm-1)
> [56097.410336] md/raid:md127: read error corrected (8 sectors at 8953089352 on dm-1)
> [56099.619970] ata25.00: exception Emask 0x0 SAct 0xfc400 SErr 0x0 action 0x0
> [56099.626852] ata25.00: irq_stat 0x40000008
> [56099.630893] ata25.00: failed command: READ FPDMA QUEUED
> [56099.636124] ata25.00: cmd 60/08:70:60:d5:a5/00:00:15:02:00/40 tag 14 ncq dma 4096 in
>                         res 43/40:08:60:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56099.652076] ata25.00: status: { DRDY SENSE ERR }
> [56099.656714] ata25.00: error: { UNC }
> [56100.347242] ata25.00: configured for UDMA/133
> [56100.347261] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
> [56100.347265] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
> [56100.347267] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Unrecovered read error
> [56100.347269] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 60 00 00 00 08 00 00
> [56100.347271] critical medium error, dev sdc, sector 8953124192 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56100.357447] ata25: EH complete
> [56102.646580] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56102.653723] ata25.00: irq_stat 0x40000008
> [56102.657799] ata25.00: failed command: READ FPDMA QUEUED
> [56102.663032] ata25.00: cmd 60/08:08:68:d5:a5/00:00:15:02:00/40 tag 1 ncq dma 4096 in
>                         res 43/40:08:68:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56102.678903] ata25.00: status: { DRDY SENSE ERR }
> [56102.683529] ata25.00: error: { UNC }
> [56102.771380] ata25.00: configured for UDMA/133
> [56102.771406] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> [56102.771410] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
> [56102.771412] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
> [56102.771414] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 68 00 00 00 08 00 00
> [56102.771416] critical medium error, dev sdc, sector 8953124200 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56102.781675] ata25: EH complete
> [56103.029621] md/raid:md127: read error corrected (8 sectors at 8953092928 on dm-1)
> [56103.092184] md/raid:md127: read error corrected (8 sectors at 8953089376 on dm-1)
> [56103.092186] md/raid:md127: read error corrected (8 sectors at 8953089384 on dm-1)
> [56105.587003] ata25.00: exception Emask 0x0 SAct 0xbf048f84 SErr 0x0 action 0x0
> [56105.594143] ata25.00: irq_stat 0x40000008
> [56105.598252] ata25.00: failed command: READ FPDMA QUEUED
> [56105.603519] ata25.00: cmd 60/08:38:c0:9d:a5/00:00:15:02:00/40 tag 7 ncq dma 4096 in
>                         res 43/40:08:c0:9d:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56105.619401] ata25.00: status: { DRDY SENSE ERR }
> [56105.624038] ata25.00: error: { UNC }
> [56105.720389] ata25.00: configured for UDMA/133
> [56105.720406] sd 24:0:0:0: [sdc] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56105.720408] sd 24:0:0:0: [sdc] tag#7 Sense Key : Medium Error [current] 
> [56105.720410] sd 24:0:0:0: [sdc] tag#7 Add. Sense: Unrecovered read error
> [56105.720411] sd 24:0:0:0: [sdc] tag#7 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d c0 00 00 00 08 00 00
> [56105.720412] critical medium error, dev sdc, sector 8953109952 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56105.730613] ata25: EH complete
> [56107.886332] ata25.00: exception Emask 0x0 SAct 0x801f0fe SErr 0x0 action 0x0
> [56107.893397] ata25.00: irq_stat 0x40000008
> [56107.897450] ata25.00: failed command: READ FPDMA QUEUED
> [56107.902682] ata25.00: cmd 60/08:60:d0:9d:a5/00:00:15:02:00/40 tag 12 ncq dma 4096 in
>                         res 43/40:08:d0:9d:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56107.918625] ata25.00: status: { DRDY SENSE ERR }
> [56107.923257] ata25.00: error: { UNC }
> [56108.027884] ata25.00: configured for UDMA/133
> [56108.027924] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56108.027928] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
> [56108.027930] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
> [56108.027932] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d d0 00 00 00 08 00 00
> [56108.027933] critical medium error, dev sdc, sector 8953109968 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56108.038104] ata25: EH complete
> [56109.020217] md/raid:md127: read error corrected (8 sectors at 8953075136 on dm-1)
> [56109.070237] md/raid:md127: read error corrected (8 sectors at 8953075152 on dm-1)
> [56128.637868] ata25.00: exception Emask 0x0 SAct 0x1f08400 SErr 0x0 action 0x0
> [56128.644996] ata25.00: irq_stat 0x40000008
> [56128.649038] ata25.00: failed command: READ FPDMA QUEUED
> [56128.654277] ata25.00: cmd 60/40:50:08:b5:bf/05:00:15:02:00/40 tag 10 ncq dma 688128 in
>                         res 43/40:40:50:b9:bf/00:05:15:02:00/00 Emask 0x408 (media error) <F>
> [56128.670394] ata25.00: status: { DRDY SENSE ERR }
> [56128.675024] ata25.00: error: { UNC }
> [56128.778932] ata25.00: configured for UDMA/133
> [56128.778969] sd 24:0:0:0: [sdc] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56128.778973] sd 24:0:0:0: [sdc] tag#10 Sense Key : Medium Error [current] 
> [56128.778975] sd 24:0:0:0: [sdc] tag#10 Add. Sense: Unrecovered read error
> [56128.778978] sd 24:0:0:0: [sdc] tag#10 CDB: Read(16) 88 00 00 00 00 02 15 bf b5 08 00 00 05 40 00 00
> [56128.778980] critical medium error, dev sdc, sector 8954819848 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [56128.789429] ata25: EH complete
> [56140.808042] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56140.815183] ata25.00: irq_stat 0x40000008
> [56140.819207] ata25.00: failed command: READ FPDMA QUEUED
> [56140.824447] ata25.00: cmd 60/c0:58:50:fa:c1/02:00:15:02:00/40 tag 11 ncq dma 360448 in
>                         res 43/40:c0:88:fa:c1/00:02:15:02:00/00 Emask 0x408 (media error) <F>
> [56140.840704] ata25.00: status: { DRDY SENSE ERR }
> [56140.845334] ata25.00: error: { UNC }
> [56140.933033] ata25.00: configured for UDMA/133
> [56140.933158] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56140.933161] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
> [56140.933163] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Unrecovered read error
> [56140.933166] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa 50 00 00 02 c0 00 00
> [56140.933168] critical medium error, dev sdc, sector 8954968656 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
> [56140.943561] ata25: EH complete
> [56151.197018] ata25.00: exception Emask 0x0 SAct 0x90a00708 SErr 0x0 action 0x0
> [56151.204161] ata25.00: irq_stat 0x40000008
> [56151.208245] ata25.00: failed command: READ FPDMA QUEUED
> [56151.213520] ata25.00: cmd 60/08:b8:50:b9:bf/00:00:15:02:00/40 tag 23 ncq dma 4096 in
>                         res 43/40:08:50:b9:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56151.229462] ata25.00: status: { DRDY SENSE ERR }
> [56151.234143] ata25.00: error: { UNC }
> [56151.346056] ata25.00: configured for UDMA/133
> [56151.346077] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56151.346081] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
> [56151.346083] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
> [56151.346085] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 15 bf b9 50 00 00 00 08 00 00
> [56151.346087] critical medium error, dev sdc, sector 8954820944 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56151.356272] ata25: EH complete
> [56151.993105] md/raid:md127: read error corrected (8 sectors at 8954786128 on dm-1)
> [56160.208102] ata25.00: exception Emask 0x0 SAct 0x6000 SErr 0x0 action 0x0
> [56160.214893] ata25.00: irq_stat 0x40000008
> [56160.218993] ata25.00: failed command: READ FPDMA QUEUED
> [56160.224233] ata25.00: cmd 60/30:68:00:dd:bf/09:00:15:02:00/40 tag 13 ncq dma 1204224 in
>                         res 43/40:30:c8:e2:bf/00:09:15:02:00/00 Emask 0x408 (media error) <F>
> [56160.240498] ata25.00: status: { DRDY SENSE ERR }
> [56160.245133] ata25.00: error: { UNC }
> [56160.342920] ata25.00: configured for UDMA/133
> [56160.342941] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
> [56160.342944] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
> [56160.342946] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
> [56160.342949] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 bf dd 00 00 00 09 30 00 00
> [56160.342950] critical medium error, dev sdc, sector 8954830080 op 0x0:(READ) flags 0x84700 phys_seg 138 prio class 0
> [56160.353550] ata25: EH complete
> [56162.898662] ata25.00: exception Emask 0x0 SAct 0xff033fff SErr 0x0 action 0x0
> [56162.905798] ata25.00: irq_stat 0x40000008
> [56162.909825] ata25.00: failed command: READ FPDMA QUEUED
> [56162.915066] ata25.00: cmd 60/08:c0:88:fa:c1/00:00:15:02:00/40 tag 24 ncq dma 4096 in
>                         res 43/40:08:88:fa:c1/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56162.931020] ata25.00: status: { DRDY SENSE ERR }
> [56162.935724] ata25.00: error: { UNC }
> [56163.025343] ata25.00: configured for UDMA/133
> [56163.025377] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56163.025380] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
> [56163.025381] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
> [56163.025383] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa 88 00 00 00 08 00 00
> [56163.025384] critical medium error, dev sdc, sector 8954968712 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56163.035579] ata25: EH complete
> [56165.407934] ata25.00: exception Emask 0x0 SAct 0x3f00301e SErr 0x0 action 0x0
> [56165.415071] ata25.00: irq_stat 0x40000008
> [56165.419099] ata25.00: failed command: READ FPDMA QUEUED
> [56165.424332] ata25.00: cmd 60/08:08:a0:fa:c1/00:00:15:02:00/40 tag 1 ncq dma 4096 in
>                         res 43/40:08:a0:fa:c1/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56165.440240] ata25.00: status: { DRDY SENSE ERR }
> [56165.444882] ata25.00: error: { UNC }
> [56165.541134] ata25.00: configured for UDMA/133
> [56165.541174] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56165.541178] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
> [56165.541180] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
> [56165.541182] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa a0 00 00 00 08 00 00
> [56165.541183] critical medium error, dev sdc, sector 8954968736 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56165.551397] ata25: EH complete
> [56165.908338] md/raid:md127: read error corrected (8 sectors at 8954933896 on dm-1)
> [56165.908399] md/raid:md127: read error corrected (8 sectors at 8954933920 on dm-1)
> [56168.776163] ata25.00: exception Emask 0x0 SAct 0x3e00 SErr 0x0 action 0x0
> [56168.782972] ata25.00: irq_stat 0x40000008
> [56168.787088] ata25.00: failed command: READ FPDMA QUEUED
> [56168.792331] ata25.00: cmd 60/68:48:00:15:c2/05:00:15:02:00/40 tag 9 ncq dma 708608 in
>                         res 43/40:68:38:16:c2/00:05:15:02:00/00 Emask 0x408 (media error) <F>
> [56168.808360] ata25.00: status: { DRDY SENSE ERR }
> [56168.812995] ata25.00: error: { UNC }
> [56168.914982] ata25.00: configured for UDMA/133
> [56168.915022] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56168.915026] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
> [56168.915028] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
> [56168.915030] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 15 c2 15 00 00 00 05 68 00 00
> [56168.915032] critical medium error, dev sdc, sector 8954975488 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [56168.925513] ata25: EH complete
> [56171.028271] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56171.035404] ata25.00: irq_stat 0x40000008
> [56171.039425] ata25.00: failed command: READ FPDMA QUEUED
> [56171.044663] ata25.00: cmd 60/88:b0:68:22:c2/05:00:15:02:00/40 tag 22 ncq dma 724992 in
>                         res 43/40:88:40:24:c2/00:05:15:02:00/00 Emask 0x408 (media error) <F>
> [56171.060767] ata25.00: status: { DRDY SENSE ERR }
> [56171.065396] ata25.00: error: { UNC }
> [56171.197504] ata25.00: configured for UDMA/133
> [56171.197561] sd 24:0:0:0: [sdc] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56171.197563] sd 24:0:0:0: [sdc] tag#22 Sense Key : Medium Error [current] 
> [56171.197565] sd 24:0:0:0: [sdc] tag#22 Add. Sense: Unrecovered read error
> [56171.197567] sd 24:0:0:0: [sdc] tag#22 CDB: Read(16) 88 00 00 00 00 02 15 c2 22 68 00 00 05 88 00 00
> [56171.197568] critical medium error, dev sdc, sector 8954978920 op 0x0:(READ) flags 0x84700 phys_seg 89 prio class 0
> [56171.207924] ata25: EH complete
> [56173.646130] ata25.00: exception Emask 0x0 SAct 0x1f830fc SErr 0x0 action 0x0
> [56173.653181] ata25.00: irq_stat 0x40000008
> [56173.657206] ata25.00: failed command: READ FPDMA QUEUED
> [56173.662452] ata25.00: cmd 60/08:98:c8:e2:bf/00:00:15:02:00/40 tag 19 ncq dma 4096 in
>                         res 43/40:08:c8:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56173.678392] ata25.00: status: { DRDY SENSE ERR }
> [56173.683024] ata25.00: error: { UNC }
> [56173.788276] ata25.00: configured for UDMA/133
> [56173.788298] sd 24:0:0:0: [sdc] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56173.788301] sd 24:0:0:0: [sdc] tag#19 Sense Key : Medium Error [current] 
> [56173.788302] sd 24:0:0:0: [sdc] tag#19 Add. Sense: Unrecovered read error
> [56173.788304] sd 24:0:0:0: [sdc] tag#19 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 c8 00 00 00 08 00 00
> [56173.788305] critical medium error, dev sdc, sector 8954831560 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56173.798484] ata25: EH complete
> [56176.164900] ata25.00: exception Emask 0x0 SAct 0x7f003c40 SErr 0x0 action 0x0
> [56176.172042] ata25.00: irq_stat 0x40000008
> [56176.176075] ata25.00: failed command: READ FPDMA QUEUED
> [56176.181310] ata25.00: cmd 60/08:c0:d8:e2:bf/00:00:15:02:00/40 tag 24 ncq dma 4096 in
>                         res 43/40:08:d8:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56176.197248] ata25.00: status: { DRDY SENSE ERR }
> [56176.201885] ata25.00: error: { UNC }
> [56176.304061] ata25.00: configured for UDMA/133
> [56176.304083] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56176.304086] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
> [56176.304088] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
> [56176.304091] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 d8 00 00 00 08 00 00
> [56176.304092] critical medium error, dev sdc, sector 8954831576 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56176.314265] ata25: EH complete
> [56178.758401] ata25.00: exception Emask 0x0 SAct 0xf0000 SErr 0x0 action 0x0
> [56178.765286] ata25.00: irq_stat 0x40000008
> [56178.769308] ata25.00: failed command: READ FPDMA QUEUED
> [56178.774543] ata25.00: cmd 60/08:80:f0:e2:bf/00:00:15:02:00/40 tag 16 ncq dma 4096 in
>                         res 43/40:08:f0:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56178.790490] ata25.00: status: { DRDY SENSE ERR }
> [56178.795122] ata25.00: error: { UNC }
> [56178.903147] ata25.00: configured for UDMA/133
> [56178.903159] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
> [56178.903162] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
> [56178.903164] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
> [56178.903166] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 f0 00 00 00 08 00 00
> [56178.903168] critical medium error, dev sdc, sector 8954831600 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56178.913335] ata25: EH complete
> [56179.970979] md/raid:md127: read error corrected (8 sectors at 8954796744 on dm-1)
> [56179.995987] md/raid:md127: read error corrected (8 sectors at 8954796760 on dm-1)
> [56180.127216] md/raid:md127: read error corrected (8 sectors at 8954796784 on dm-1)
> [56182.417776] ata25.00: exception Emask 0x0 SAct 0x82a33a10 SErr 0x0 action 0x0
> [56182.424916] ata25.00: irq_stat 0x40000008
> [56182.428940] ata25.00: failed command: READ FPDMA QUEUED
> [56182.434181] ata25.00: cmd 60/00:68:00:f0:bf/05:00:15:02:00/40 tag 13 ncq dma 655360 in
>                         res 43/40:00:a8:f0:bf/00:05:15:02:00/00 Emask 0x408 (media error) <F>
> [56182.450294] ata25.00: status: { DRDY SENSE ERR }
> [56182.454927] ata25.00: error: { UNC }
> [56182.860118] ata25.00: configured for UDMA/133
> [56182.860160] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56182.860162] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
> [56182.860164] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
> [56182.860166] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 00 00 00 05 00 00 00
> [56182.860167] critical medium error, dev sdc, sector 8954834944 op 0x0:(READ) flags 0x80700 phys_seg 160 prio class 0
> [56182.870623] ata25: EH complete
> [56187.499909] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56187.507047] ata25.00: irq_stat 0x40000008
> [56187.511096] ata25.00: failed command: READ FPDMA QUEUED
> [56187.516328] ata25.00: cmd 60/c0:38:40:0a:c0/02:00:15:02:00/40 tag 7 ncq dma 360448 in
>                         res 43/40:c0:90:0c:c0/00:02:15:02:00/00 Emask 0x408 (media error) <F>
> [56187.532352] ata25.00: status: { DRDY SENSE ERR }
> [56187.536977] ata25.00: error: { UNC }
> [56187.683438] ata25.00: configured for UDMA/133
> [56187.683565] sd 24:0:0:0: [sdc] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
> [56187.683568] sd 24:0:0:0: [sdc] tag#7 Sense Key : Medium Error [current] 
> [56187.683571] sd 24:0:0:0: [sdc] tag#7 Add. Sense: Unrecovered read error
> [56187.683573] sd 24:0:0:0: [sdc] tag#7 CDB: Read(16) 88 00 00 00 00 02 15 c0 0a 40 00 00 02 c0 00 00
> [56187.683574] critical medium error, dev sdc, sector 8954841664 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
> [56187.693985] ata25: EH complete
> [56190.600849] ata25.00: exception Emask 0x0 SAct 0x3c00000 SErr 0x0 action 0x0
> [56190.607902] ata25.00: irq_stat 0x40000008
> [56190.611931] ata25.00: failed command: READ FPDMA QUEUED
> [56190.617167] ata25.00: cmd 60/08:b0:40:24:c2/00:00:15:02:00/40 tag 22 ncq dma 4096 in
>                         res 43/40:08:40:24:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56190.633103] ata25.00: status: { DRDY SENSE ERR }
> [56190.637732] ata25.00: error: { UNC }
> [56190.749035] ata25.00: configured for UDMA/133
> [56190.749045] sd 24:0:0:0: [sdc] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56190.749047] sd 24:0:0:0: [sdc] tag#22 Sense Key : Medium Error [current] 
> [56190.749049] sd 24:0:0:0: [sdc] tag#22 Add. Sense: Unrecovered read error
> [56190.749051] sd 24:0:0:0: [sdc] tag#22 CDB: Read(16) 88 00 00 00 00 02 15 c2 24 40 00 00 00 08 00 00
> [56190.749052] critical medium error, dev sdc, sector 8954979392 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56190.759221] ata25: EH complete
> [56192.116233] md/raid:md127: read error corrected (8 sectors at 8954944576 on dm-1)
> [56194.231404] ata25.00: exception Emask 0x0 SAct 0xffe00083 SErr 0x0 action 0x0
> [56194.238545] ata25.00: irq_stat 0x40000008
> [56194.242576] ata25.00: failed command: READ FPDMA QUEUED
> [56194.247816] ata25.00: cmd 60/08:a8:90:0c:c0/00:00:15:02:00/40 tag 21 ncq dma 4096 in
>                         res 43/40:08:90:0c:c0/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56194.263754] ata25.00: status: { DRDY SENSE ERR }
> [56194.268384] ata25.00: error: { UNC }
> [56194.372726] ata25.00: configured for UDMA/133
> [56194.372748] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56194.372752] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
> [56194.372754] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Unrecovered read error
> [56194.372756] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 15 c0 0c 90 00 00 00 08 00 00
> [56194.372758] critical medium error, dev sdc, sector 8954842256 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56194.382955] ata25: EH complete
> [56196.397118] ata25.00: exception Emask 0x0 SAct 0xffcf3fff SErr 0x0 action 0x0
> [56196.404260] ata25.00: irq_stat 0x40000008
> [56196.408299] ata25.00: failed command: READ FPDMA QUEUED
> [56196.413528] ata25.00: cmd 60/08:80:a8:f0:bf/00:00:15:02:00/40 tag 16 ncq dma 4096 in
>                         res 43/40:08:a8:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56196.429462] ata25.00: status: { DRDY SENSE ERR }
> [56196.434090] ata25.00: error: { UNC }
> [56196.530294] ata25.00: configured for UDMA/133
> [56196.530341] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56196.530345] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
> [56196.530347] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
> [56196.530349] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 a8 00 00 00 08 00 00
> [56196.530351] critical medium error, dev sdc, sector 8954835112 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56196.540554] ata25: EH complete
> [56198.681800] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56198.688940] ata25.00: irq_stat 0x40000008
> [56198.692969] ata25.00: failed command: READ FPDMA QUEUED
> [56198.698203] ata25.00: cmd 60/08:48:b0:f0:bf/00:00:15:02:00/40 tag 9 ncq dma 4096 in
>                         res 43/40:08:b0:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56198.714053] ata25.00: status: { DRDY SENSE ERR }
> [56198.718683] ata25.00: error: { UNC }
> [56198.804495] ata25.00: configured for UDMA/133
> [56198.804544] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56198.804547] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
> [56198.804548] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
> [56198.804550] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 b0 00 00 00 08 00 00
> [56198.804551] critical medium error, dev sdc, sector 8954835120 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56198.814780] ata25: EH complete
> [56201.562060] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56201.569195] ata25.00: irq_stat 0x40000008
> [56201.573222] ata25.00: failed command: READ FPDMA QUEUED
> [56201.578453] ata25.00: cmd 60/08:98:b8:f0:bf/00:00:15:02:00/40 tag 19 ncq dma 4096 in
>                         res 43/40:08:b8:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56201.594397] ata25.00: status: { DRDY SENSE ERR }
> [56201.599028] ata25.00: error: { UNC }
> [56201.686862] ata25.00: configured for UDMA/133
> [56201.686971] sd 24:0:0:0: [sdc] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
> [56201.686975] sd 24:0:0:0: [sdc] tag#19 Sense Key : Medium Error [current] 
> [56201.686977] sd 24:0:0:0: [sdc] tag#19 Add. Sense: Unrecovered read error
> [56201.686980] sd 24:0:0:0: [sdc] tag#19 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 b8 00 00 00 08 00 00
> [56201.686981] critical medium error, dev sdc, sector 8954835128 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56201.697185] ata25: EH complete
> [56204.153189] ata25.00: exception Emask 0x0 SAct 0x7eb9f001 SErr 0x0 action 0x0
> [56204.160325] ata25.00: irq_stat 0x40000008
> [56204.164349] ata25.00: failed command: READ FPDMA QUEUED
> [56204.169575] ata25.00: cmd 60/08:70:d0:f0:bf/00:00:15:02:00/40 tag 14 ncq dma 4096 in
>                         res 43/40:08:d0:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56204.185512] ata25.00: status: { DRDY SENSE ERR }
> [56204.190140] ata25.00: error: { UNC }
> [56204.285936] ata25.00: configured for UDMA/133
> [56204.285962] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> [56204.285966] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
> [56204.285968] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Unrecovered read error
> [56204.285969] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 d0 00 00 00 08 00 00
> [56204.285970] critical medium error, dev sdc, sector 8954835152 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56204.296157] ata25: EH complete
> [56204.675513] md/raid:md127: read error corrected (8 sectors at 8954800296 on dm-1)
> [56204.675539] md/raid:md127: read error corrected (8 sectors at 8954807440 on dm-1)
> [56205.744062] md/raid:md127: read error corrected (8 sectors at 8954800304 on dm-1)
> [56205.744063] md/raid:md127: read error corrected (8 sectors at 8954800312 on dm-1)
> [56205.762220] md/raid:md127: read error corrected (8 sectors at 8954800336 on dm-1)
> [56208.131813] ata25.00: exception Emask 0x0 SAct 0x3ff87c SErr 0x0 action 0x0
> [56208.138777] ata25.00: irq_stat 0x40000008
> [56208.142810] ata25.00: failed command: READ FPDMA QUEUED
> [56208.148045] ata25.00: cmd 60/08:58:38:16:c2/00:00:15:02:00/40 tag 11 ncq dma 4096 in
>                         res 43/40:08:38:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56208.163988] ata25.00: status: { DRDY SENSE ERR }
> [56208.168617] ata25.00: error: { UNC }
> [56208.251256] ata25.00: configured for UDMA/133
> [56208.251278] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56208.251281] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
> [56208.251283] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Unrecovered read error
> [56208.251286] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 38 00 00 00 08 00 00
> [56208.251287] critical medium error, dev sdc, sector 8954975800 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56208.261478] ata25: EH complete
> [56210.458900] ata25.00: exception Emask 0x0 SAct 0x57c0154b SErr 0x0 action 0x0
> [56210.466042] ata25.00: irq_stat 0x40000008
> [56210.470074] ata25.00: failed command: READ FPDMA QUEUED
> [56210.475307] ata25.00: cmd 60/08:e0:48:16:c2/00:00:15:02:00/40 tag 28 ncq dma 4096 in
>                         res 43/40:08:48:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56210.491246] ata25.00: status: { DRDY SENSE ERR }
> [56210.495877] ata25.00: error: { UNC }
> [56210.592096] ata25.00: configured for UDMA/133
> [56210.592116] sd 24:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56210.592118] sd 24:0:0:0: [sdc] tag#28 Sense Key : Medium Error [current] 
> [56210.592120] sd 24:0:0:0: [sdc] tag#28 Add. Sense: Unrecovered read error
> [56210.592122] sd 24:0:0:0: [sdc] tag#28 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 48 00 00 00 08 00 00
> [56210.592123] critical medium error, dev sdc, sector 8954975816 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56210.602295] ata25: EH complete
> [56212.738574] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56212.745711] ata25.00: irq_stat 0x40000008
> [56212.749733] ata25.00: failed command: READ FPDMA QUEUED
> [56212.754964] ata25.00: cmd 60/08:68:60:16:c2/00:00:15:02:00/40 tag 13 ncq dma 4096 in
>                         res 43/40:08:60:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56212.770902] ata25.00: status: { DRDY SENSE ERR }
> [56212.775529] ata25.00: error: { UNC }
> [56212.857966] ata25.00: configured for UDMA/133
> [56212.858023] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=6s
> [56212.858026] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
> [56212.858028] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
> [56212.858031] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 60 00 00 00 08 00 00
> [56212.858032] critical medium error, dev sdc, sector 8954975840 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56212.868220] ata25: EH complete
> [56222.496422] ata25.00: exception Emask 0x0 SAct 0x60000000 SErr 0x0 action 0x0
> [56222.503555] ata25.00: irq_stat 0x40000008
> [56222.507581] ata25.00: failed command: READ FPDMA QUEUED
> [56222.512806] ata25.00: cmd 60/00:e8:00:25:c0/08:00:15:02:00/40 tag 29 ncq dma 1048576 in
>                         res 43/40:00:50:28:c0/00:08:15:02:00/00 Emask 0x408 (media error) <F>
> [56222.529006] ata25.00: status: { DRDY SENSE ERR }
> [56222.533632] ata25.00: error: { UNC }
> [56222.671224] ata25.00: configured for UDMA/133
> [56222.671234] sd 24:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> [56222.671237] sd 24:0:0:0: [sdc] tag#29 Sense Key : Aborted Command [current] 
> [56222.671239] sd 24:0:0:0: [sdc] tag#29 <<vendor>>ASC=0x80 ASCQ=0x2 
> [56222.671241] sd 24:0:0:0: [sdc] tag#29 CDB: Read(16) 88 00 00 00 00 02 15 c0 25 00 00 00 08 00 00 00
> [56222.671242] I/O error, dev sdc, sector 8954848512 op 0x0:(READ) flags 0x84700 phys_seg 30 prio class 0
> [56222.680538] ata25: EH complete
> [56222.688487] md/raid:md127: read error corrected (8 sectors at 8954940984 on dm-1)
> [56222.688713] md/raid:md127: read error corrected (8 sectors at 8954941000 on dm-1)
> [56222.694091] md/raid:md127: read error corrected (8 sectors at 8954941024 on dm-1)
> [56268.461816] ata25.00: exception Emask 0x0 SAct 0x3c015f SErr 0x0 action 0x0
> [56268.468783] ata25.00: irq_stat 0x40000008
> [56268.472807] ata25.00: failed command: READ FPDMA QUEUED
> [56268.478039] ata25.00: cmd 60/68:00:98:08:f9/07:00:15:02:00/40 tag 0 ncq dma 970752 in
>                         res 43/40:68:f0:0c:f9/00:07:15:02:00/00 Emask 0x408 (media error) <F>
> [56268.494065] ata25.00: status: { DRDY SENSE ERR }
> [56268.498696] ata25.00: error: { UNC }
> [56268.613464] ata25.00: configured for UDMA/133
> [56268.613500] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> [56268.613503] sd 24:0:0:0: [sdc] tag#0 Sense Key : Aborted Command [current] 
> [56268.613505] sd 24:0:0:0: [sdc] tag#0 <<vendor>>ASC=0x80 ASCQ=0x2 
> [56268.613507] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 f9 08 98 00 00 07 68 00 00
> [56268.613508] I/O error, dev sdc, sector 8958576792 op 0x0:(READ) flags 0x80700 phys_seg 112 prio class 0
> [56268.622926] ata25: EH complete
> [56274.436128] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56274.443268] ata25.00: irq_stat 0x40000008
> [56274.447298] ata25.00: failed command: READ FPDMA QUEUED
> [56274.452542] ata25.00: cmd 60/c0:f8:58:82:fb/02:00:15:02:00/40 tag 31 ncq dma 360448 in
>                         res 43/40:c0:c0:84:fb/00:02:15:02:00/00 Emask 0x408 (media error) <F>
> [56274.468653] ata25.00: status: { DRDY SENSE ERR }
> [56274.473431] ata25.00: error: { UNC }
> [56274.569728] ata25.00: configured for UDMA/133
> [56274.569846] sd 24:0:0:0: [sdc] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
> [56274.569849] sd 24:0:0:0: [sdc] tag#31 Sense Key : Medium Error [current] 
> [56274.569851] sd 24:0:0:0: [sdc] tag#31 Add. Sense: Unrecovered read error
> [56274.569853] sd 24:0:0:0: [sdc] tag#31 CDB: Read(16) 88 00 00 00 00 02 15 fb 82 58 00 00 02 c0 00 00
> [56274.569854] critical medium error, dev sdc, sector 8958739032 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
> [56274.580225] ata25: EH complete
> [56277.816375] ata25.00: exception Emask 0x0 SAct 0x7c00003f SErr 0x0 action 0x0
> [56277.823514] ata25.00: irq_stat 0x40000008
> [56277.827538] ata25.00: failed command: READ FPDMA QUEUED
> [56277.832776] ata25.00: cmd 60/08:d0:c0:84:fb/00:00:15:02:00/40 tag 26 ncq dma 4096 in
>                         res 43/40:08:c0:84:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56277.848709] ata25.00: status: { DRDY SENSE ERR }
> [56277.853339] ata25.00: error: { UNC }
> [56277.943562] ata25.00: configured for UDMA/133
> [56277.943583] sd 24:0:0:0: [sdc] tag#26 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56277.943586] sd 24:0:0:0: [sdc] tag#26 Sense Key : Medium Error [current] 
> [56277.943588] sd 24:0:0:0: [sdc] tag#26 Add. Sense: Unrecovered read error
> [56277.943590] sd 24:0:0:0: [sdc] tag#26 CDB: Read(16) 88 00 00 00 00 02 15 fb 84 c0 00 00 00 08 00 00
> [56277.943591] critical medium error, dev sdc, sector 8958739648 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56277.953781] ata25: EH complete
> [56278.035244] md/raid:md127: read error corrected (8 sectors at 8958704832 on dm-1)
> [56282.414840] ata25.00: exception Emask 0x0 SAct 0x1f00 SErr 0x0 action 0x0
> [56282.421632] ata25.00: irq_stat 0x40000008
> [56282.425664] ata25.00: failed command: READ FPDMA QUEUED
> [56282.430892] ata25.00: cmd 60/40:40:00:9d:fb/05:00:15:02:00/40 tag 8 ncq dma 688128 in
>                         res 43/40:40:70:a0:fb/00:05:15:02:00/00 Emask 0x408 (media error) <F>
> [56282.446907] ata25.00: status: { DRDY SENSE ERR }
> [56282.451530] ata25.00: error: { UNC }
> [56282.550283] ata25.00: configured for UDMA/133
> [56282.550306] sd 24:0:0:0: [sdc] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56282.550309] sd 24:0:0:0: [sdc] tag#8 Sense Key : Medium Error [current] 
> [56282.550311] sd 24:0:0:0: [sdc] tag#8 Add. Sense: Unrecovered read error
> [56282.550313] sd 24:0:0:0: [sdc] tag#8 CDB: Read(16) 88 00 00 00 00 02 15 fb 9d 00 00 00 05 40 00 00
> [56282.550314] critical medium error, dev sdc, sector 8958745856 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [56282.560764] ata25: EH complete
> [56284.651765] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56284.658903] ata25.00: irq_stat 0x40000008
> [56284.662929] ata25.00: failed command: READ FPDMA QUEUED
> [56284.668166] ata25.00: cmd 60/f8:68:08:ad:fb/02:00:15:02:00/40 tag 13 ncq dma 389120 in
>                         res 43/40:f8:50:ae:fb/00:02:15:02:00/00 Emask 0x408 (media error) <F>
> [56284.684304] ata25.00: status: { DRDY SENSE ERR }
> [56284.688940] ata25.00: error: { UNC }
> [56284.782803] ata25.00: configured for UDMA/133
> [56284.782879] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56284.782882] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
> [56284.782884] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
> [56284.782886] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 fb ad 08 00 00 02 f8 00 00
> [56284.782887] critical medium error, dev sdc, sector 8958749960 op 0x0:(READ) flags 0x80700 phys_seg 94 prio class 0
> [56284.793258] ata25: EH complete
> [56291.737028] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56291.744159] ata25.00: irq_stat 0x40000008
> [56291.748188] ata25.00: failed command: READ FPDMA QUEUED
> [56291.753427] ata25.00: cmd 60/08:c8:50:ae:fb/00:00:15:02:00/40 tag 25 ncq dma 4096 in
>                         res 43/40:08:50:ae:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56291.769386] ata25.00: status: { DRDY SENSE ERR }
> [56291.774088] ata25.00: error: { UNC }
> [56291.880394] ata25.00: configured for UDMA/133
> [56291.880466] sd 24:0:0:0: [sdc] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56291.880470] sd 24:0:0:0: [sdc] tag#25 Sense Key : Medium Error [current] 
> [56291.880472] sd 24:0:0:0: [sdc] tag#25 Add. Sense: Unrecovered read error
> [56291.880475] sd 24:0:0:0: [sdc] tag#25 CDB: Read(16) 88 00 00 00 00 02 15 fb ae 50 00 00 00 08 00 00
> [56291.880476] critical medium error, dev sdc, sector 8958750288 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56291.890675] ata25: EH complete
> [56294.678468] ata25.00: exception Emask 0x0 SAct 0x3800800 SErr 0x0 action 0x0
> [56294.685525] ata25.00: irq_stat 0x40000008
> [56294.689551] ata25.00: failed command: READ FPDMA QUEUED
> [56294.694778] ata25.00: cmd 60/08:c0:60:ae:fb/00:00:15:02:00/40 tag 24 ncq dma 4096 in
>                         res 43/40:08:60:ae:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56294.710714] ata25.00: status: { DRDY SENSE ERR }
> [56294.715342] ata25.00: error: { UNC }
> [56294.804414] ata25.00: configured for UDMA/133
> [56294.804432] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
> [56294.804436] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
> [56294.804438] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
> [56294.804440] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 fb ae 60 00 00 00 08 00 00
> [56294.804442] critical medium error, dev sdc, sector 8958750304 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56294.814622] ata25: EH complete
> [56294.823246] md/raid:md127: read error corrected (8 sectors at 8958715472 on dm-1)
> [56295.533704] md/raid:md127: read error corrected (8 sectors at 8958715488 on dm-1)
> [56297.739116] ata25.00: exception Emask 0x0 SAct 0x811807fe SErr 0x0 action 0x0
> [56297.746250] ata25.00: irq_stat 0x40000008
> [56297.750274] ata25.00: failed command: READ FPDMA QUEUED
> [56297.755507] ata25.00: cmd 60/08:08:70:a0:fb/00:00:15:02:00/40 tag 1 ncq dma 4096 in
>                         res 43/40:08:70:a0:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
> [56297.771358] ata25.00: status: { DRDY SENSE ERR }
> [56297.775986] ata25.00: error: { UNC }
> [56297.861641] ata25.00: configured for UDMA/133
> [56297.861658] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56297.861661] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
> [56297.861664] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
> [56297.861666] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 fb a0 70 00 00 00 08 00 00
> [56297.861668] critical medium error, dev sdc, sector 8958746736 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56297.871852] ata25: EH complete
> [56298.209332] md/raid:md127: read error corrected (8 sectors at 8958711920 on dm-1)
> [56305.006612] ata25.00: exception Emask 0x0 SAct 0x3f28f00 SErr 0x0 action 0x0
> [56305.013664] ata25.00: irq_stat 0x40000008
> [56305.017691] ata25.00: failed command: READ FPDMA QUEUED
> [56305.022937] ata25.00: cmd 60/00:78:00:90:15/05:00:16:02:00/40 tag 15 ncq dma 655360 in
>                         res 43/40:00:40:93:15/00:05:16:02:00/00 Emask 0x408 (media error) <F>
> [56305.039048] ata25.00: status: { DRDY SENSE ERR }
> [56305.043677] ata25.00: error: { UNC }
> [56305.168764] ata25.00: configured for UDMA/133
> [56305.168820] sd 24:0:0:0: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56305.168824] sd 24:0:0:0: [sdc] tag#15 Sense Key : Medium Error [current] 
> [56305.168826] sd 24:0:0:0: [sdc] tag#15 Add. Sense: Unrecovered read error
> [56305.168828] sd 24:0:0:0: [sdc] tag#15 CDB: Read(16) 88 00 00 00 00 02 16 15 90 00 00 00 05 00 00 00
> [56305.168830] critical medium error, dev sdc, sector 8960446464 op 0x0:(READ) flags 0x80700 phys_seg 159 prio class 0
> [56305.179276] ata25: EH complete
> [56307.266729] ata25.00: exception Emask 0x0 SAct 0xec SErr 0x0 action 0x0
> [56307.273348] ata25.00: irq_stat 0x40000008
> [56307.277379] ata25.00: failed command: READ FPDMA QUEUED
> [56307.282614] ata25.00: cmd 60/40:10:00:9d:15/05:00:16:02:00/40 tag 2 ncq dma 688128 in
>                         res 43/40:40:20:a1:15/00:05:16:02:00/00 Emask 0x408 (media error) <F>
> [56307.298638] ata25.00: status: { DRDY SENSE ERR }
> [56307.303270] ata25.00: error: { UNC }
> [56307.399985] ata25.00: configured for UDMA/133
> [56307.400014] sd 24:0:0:0: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56307.400017] sd 24:0:0:0: [sdc] tag#2 Sense Key : Medium Error [current] 
> [56307.400019] sd 24:0:0:0: [sdc] tag#2 Add. Sense: Unrecovered read error
> [56307.400021] sd 24:0:0:0: [sdc] tag#2 CDB: Read(16) 88 00 00 00 00 02 16 15 9d 00 00 00 05 40 00 00
> [56307.400023] critical medium error, dev sdc, sector 8960449792 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [56307.410470] ata25: EH complete
> [56310.075585] ata25.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
> [56310.082119] ata25.00: irq_stat 0x40000008
> [56310.086146] ata25.00: failed command: READ FPDMA QUEUED
> [56310.091383] ata25.00: cmd 60/08:00:40:93:15/00:00:16:02:00/40 tag 0 ncq dma 4096 in
>                         res 43/40:08:40:93:15/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56310.107240] ata25.00: status: { DRDY SENSE ERR }
> [56310.111869] ata25.00: error: { UNC }
> [56310.249012] ata25.00: configured for UDMA/133
> [56310.249025] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56310.249028] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
> [56310.249031] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
> [56310.249033] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 16 15 93 40 00 00 00 08 00 00
> [56310.249035] critical medium error, dev sdc, sector 8960447296 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56310.259210] ata25: EH complete
> [56312.894324] md/raid:md127: read error corrected (8 sectors at 8960412480 on dm-1)
> [56315.319228] ata25.00: exception Emask 0x0 SAct 0xe0000 SErr 0x0 action 0x0
> [56315.326108] ata25.00: irq_stat 0x40000008
> [56315.330136] ata25.00: failed command: READ FPDMA QUEUED
> [56315.335376] ata25.00: cmd 60/08:88:20:a1:15/00:00:16:02:00/40 tag 17 ncq dma 4096 in
>                         res 43/40:08:20:a1:15/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56315.351313] ata25.00: status: { DRDY SENSE ERR }
> [56315.355946] ata25.00: error: { UNC }
> [56315.488843] ata25.00: configured for UDMA/133
> [56315.488853] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56315.488856] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
> [56315.488857] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
> [56315.488859] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 16 15 a1 20 00 00 00 08 00 00
> [56315.488861] critical medium error, dev sdc, sector 8960450848 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56315.499030] ata25: EH complete
> [56316.119276] md/raid:md127: read error corrected (8 sectors at 8960416032 on dm-1)
> [56324.775934] ata25.00: exception Emask 0x0 SAct 0x7000000f SErr 0x0 action 0x0
> [56324.783070] ata25.00: irq_stat 0x40000008
> [56324.787095] ata25.00: failed command: READ FPDMA QUEUED
> [56324.792337] ata25.00: cmd 60/c0:e0:68:ca:34/02:00:16:02:00/40 tag 28 ncq dma 360448 in
>                         res 43/40:c0:88:cb:34/00:02:16:02:00/00 Emask 0x408 (media error) <F>
> [56324.808453] ata25.00: status: { DRDY SENSE ERR }
> [56324.813084] ata25.00: error: { UNC }
> [56324.935531] ata25.00: configured for UDMA/133
> [56324.935573] sd 24:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
> [56324.935577] sd 24:0:0:0: [sdc] tag#28 Sense Key : Medium Error [current] 
> [56324.935579] sd 24:0:0:0: [sdc] tag#28 Add. Sense: Unrecovered read error
> [56324.935582] sd 24:0:0:0: [sdc] tag#28 CDB: Read(16) 88 00 00 00 00 02 16 34 ca 68 00 00 02 c0 00 00
> [56324.935583] critical medium error, dev sdc, sector 8962493032 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
> [56324.945933] ata25: EH complete
> [56327.568023] ata25.00: exception Emask 0x0 SAct 0x7f8001ff SErr 0x0 action 0x0
> [56327.575156] ata25.00: irq_stat 0x40000008
> [56327.579185] ata25.00: failed command: READ FPDMA QUEUED
> [56327.584415] ata25.00: cmd 60/08:b8:88:cb:34/00:00:16:02:00/40 tag 23 ncq dma 4096 in
>                         res 43/40:08:88:cb:34/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56327.600346] ata25.00: status: { DRDY SENSE ERR }
> [56327.604973] ata25.00: error: { UNC }
> [56327.692932] ata25.00: configured for UDMA/133
> [56327.692968] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56327.692971] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
> [56327.692973] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
> [56327.692976] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 16 34 cb 88 00 00 00 08 00 00
> [56327.692978] critical medium error, dev sdc, sector 8962493320 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56327.703153] ata25: EH complete
> [56330.009432] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56330.016578] ata25.00: irq_stat 0x40000008
> [56330.020604] ata25.00: failed command: READ FPDMA QUEUED
> [56330.025845] ata25.00: cmd 60/08:b8:90:cb:34/00:00:16:02:00/40 tag 23 ncq dma 4096 in
>                         res 43/40:08:90:cb:34/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56330.041782] ata25.00: status: { DRDY SENSE ERR }
> [56330.046407] ata25.00: error: { UNC }
> [56330.142059] ata25.00: configured for UDMA/133
> [56330.142130] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56330.142133] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
> [56330.142135] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
> [56330.142136] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 16 34 cb 90 00 00 00 08 00 00
> [56330.142138] critical medium error, dev sdc, sector 8962493328 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56330.152328] ata25: EH complete
> [56330.473245] md/raid:md127: read error corrected (8 sectors at 8962458504 on dm-1)
> [56330.475305] md/raid:md127: read error corrected (8 sectors at 8962458512 on dm-1)
> [56381.913875] ata25.00: exception Emask 0x0 SAct 0x8 SErr 0x0 action 0x0
> [56381.920405] ata25.00: irq_stat 0x40000008
> [56381.924429] ata25.00: failed command: READ FPDMA QUEUED
> [56381.929664] ata25.00: cmd 60/00:18:00:d0:6b/05:00:16:02:00/40 tag 3 ncq dma 655360 in
>                         res 43/40:00:18:d0:6b/00:05:16:02:00/00 Emask 0x408 (media error) <F>
> [56381.945685] ata25.00: status: { DRDY SENSE ERR }
> [56381.950313] ata25.00: error: { UNC }
> [56382.040583] ata25.00: configured for UDMA/133
> [56382.040598] sd 24:0:0:0: [sdc] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56382.040601] sd 24:0:0:0: [sdc] tag#3 Sense Key : Medium Error [current] 
> [56382.040603] sd 24:0:0:0: [sdc] tag#3 Add. Sense: Unrecovered read error
> [56382.040605] sd 24:0:0:0: [sdc] tag#3 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 00 00 00 05 00 00 00
> [56382.040606] critical medium error, dev sdc, sector 8966098944 op 0x0:(READ) flags 0x80700 phys_seg 160 prio class 0
> [56382.051045] ata25: EH complete
> [56384.109723] ata25.00: exception Emask 0x0 SAct 0x7fe00000 SErr 0x0 action 0x0
> [56384.116861] ata25.00: irq_stat 0x40000008
> [56384.120885] ata25.00: failed command: READ FPDMA QUEUED
> [56384.126117] ata25.00: cmd 60/40:a8:00:dd:6b/05:00:16:02:00/40 tag 21 ncq dma 688128 in
>                         res 43/40:40:f8:dd:6b/00:05:16:02:00/00 Emask 0x408 (media error) <F>
> [56384.142231] ata25.00: status: { DRDY SENSE ERR }
> [56384.146862] ata25.00: error: { UNC }
> [56384.264767] ata25.00: configured for UDMA/133
> [56384.264806] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56384.264809] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
> [56384.264810] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Unrecovered read error
> [56384.264812] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 16 6b dd 00 00 00 05 40 00 00
> [56384.264813] critical medium error, dev sdc, sector 8966102272 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [56384.275275] ata25: EH complete
> [56386.346397] ata25.00: exception Emask 0x0 SAct 0x7f7d0 SErr 0x0 action 0x0
> [56386.353275] ata25.00: irq_stat 0x40000008
> [56386.357302] ata25.00: failed command: READ FPDMA QUEUED
> [56386.362537] ata25.00: cmd 60/c0:30:40:ea:6b/02:00:16:02:00/40 tag 6 ncq dma 360448 in
>                         res 43/40:c0:d8:eb:6b/00:02:16:02:00/00 Emask 0x408 (media error) <F>
> [56386.378556] ata25.00: status: { DRDY SENSE ERR }
> [56386.383185] ata25.00: error: { UNC }
> [56386.472369] ata25.00: configured for UDMA/133
> [56386.472430] sd 24:0:0:0: [sdc] tag#6 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56386.472433] sd 24:0:0:0: [sdc] tag#6 Sense Key : Medium Error [current] 
> [56386.472435] sd 24:0:0:0: [sdc] tag#6 Add. Sense: Unrecovered read error
> [56386.472438] sd 24:0:0:0: [sdc] tag#6 CDB: Read(16) 88 00 00 00 00 02 16 6b ea 40 00 00 02 c0 00 00
> [56386.472439] critical medium error, dev sdc, sector 8966105664 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
> [56386.482798] ata25: EH complete
> [56388.841034] ata25.00: exception Emask 0x0 SAct 0x80000041 SErr 0x0 action 0x0
> [56388.848169] ata25.00: irq_stat 0x40000008
> [56388.852195] ata25.00: failed command: READ FPDMA QUEUED
> [56388.857423] ata25.00: cmd 60/08:f8:18:d0:6b/00:00:16:02:00/40 tag 31 ncq dma 4096 in
>                         res 43/40:08:18:d0:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56388.873364] ata25.00: status: { DRDY SENSE ERR }
> [56388.877996] ata25.00: error: { UNC }
> [56388.988119] ata25.00: configured for UDMA/133
> [56388.988136] sd 24:0:0:0: [sdc] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56388.988139] sd 24:0:0:0: [sdc] tag#31 Sense Key : Medium Error [current] 
> [56388.988141] sd 24:0:0:0: [sdc] tag#31 Add. Sense: Unrecovered read error
> [56388.988142] sd 24:0:0:0: [sdc] tag#31 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 18 00 00 00 08 00 00
> [56388.988143] critical medium error, dev sdc, sector 8966098968 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56388.998324] ata25: EH complete
> [56391.522804] ata25.00: exception Emask 0x0 SAct 0x1e9c179c SErr 0x0 action 0x0
> [56391.529945] ata25.00: irq_stat 0x40000008
> [56391.533976] ata25.00: failed command: READ FPDMA QUEUED
> [56391.539212] ata25.00: cmd 60/08:d8:20:d0:6b/00:00:16:02:00/40 tag 27 ncq dma 4096 in
>                         res 43/40:08:20:d0:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56391.555146] ata25.00: status: { DRDY SENSE ERR }
> [56391.559778] ata25.00: error: { UNC }
> [56391.695556] ata25.00: configured for UDMA/133
> [56391.695634] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56391.695637] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
> [56391.695639] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
> [56391.695642] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 20 00 00 00 08 00 00
> [56391.695643] critical medium error, dev sdc, sector 8966098976 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56391.705828] ata25: EH complete
> [56393.848207] ata25.00: exception Emask 0x0 SAct 0xf9fff8bf SErr 0x0 action 0x0
> [56393.855346] ata25.00: irq_stat 0x40000008
> [56393.859386] ata25.00: failed command: READ FPDMA QUEUED
> [56393.864624] ata25.00: cmd 60/c8:d8:48:3a:6e/02:00:16:02:00/40 tag 27 ncq dma 364544 in
>                         res 43/40:c8:08:3b:6e/00:02:16:02:00/00 Emask 0x408 (media error) <F>
> [56393.880732] ata25.00: status: { DRDY SENSE ERR }
> [56393.885357] ata25.00: error: { UNC }
> [56393.986399] ata25.00: configured for UDMA/133
> [56393.986540] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56393.986544] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
> [56393.986546] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
> [56393.986549] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 16 6e 3a 48 00 00 02 c8 00 00
> [56393.986550] critical medium error, dev sdc, sector 8966257224 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
> [56393.996900] ata25: EH complete
> [56396.083820] md/raid:md127: read error corrected (8 sectors at 8966064160 on dm-1)
> [56396.083837] md/raid:md127: read error corrected (8 sectors at 8966064152 on dm-1)
> [56400.565346] ata25.00: exception Emask 0x0 SAct 0xfdfbf7df SErr 0x0 action 0x0
> [56400.572485] ata25.00: irq_stat 0x40000008
> [56400.576527] ata25.00: failed command: READ FPDMA QUEUED
> [56400.581758] ata25.00: cmd 60/08:60:08:3b:6e/00:00:16:02:00/40 tag 12 ncq dma 4096 in
>                         res 43/40:08:08:3b:6e/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56400.597695] ata25.00: status: { DRDY SENSE ERR }
> [56400.602329] ata25.00: error: { UNC }
> [56400.734083] ata25.00: configured for UDMA/133
> [56400.734136] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56400.734139] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
> [56400.734142] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
> [56400.734144] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 16 6e 3b 08 00 00 00 08 00 00
> [56400.734146] critical medium error, dev sdc, sector 8966257416 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56400.744333] ata25: EH complete
> [56401.189218] md/raid:md127: read error corrected (8 sectors at 8966222600 on dm-1)
> [56403.685654] ata25.00: exception Emask 0x0 SAct 0x701ffff0 SErr 0x0 action 0x0
> [56403.692794] ata25.00: irq_stat 0x40000008
> [56403.696816] ata25.00: failed command: READ FPDMA QUEUED
> [56403.702049] ata25.00: cmd 60/08:20:d8:eb:6b/00:00:16:02:00/40 tag 4 ncq dma 4096 in
>                         res 43/40:08:d8:eb:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56403.717891] ata25.00: status: { DRDY SENSE ERR }
> [56403.722515] ata25.00: error: { UNC }
> [56403.833001] ata25.00: configured for UDMA/133
> [56403.833024] sd 24:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56403.833029] sd 24:0:0:0: [sdc] tag#4 Sense Key : Medium Error [current] 
> [56403.833032] sd 24:0:0:0: [sdc] tag#4 Add. Sense: Unrecovered read error
> [56403.833034] sd 24:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 16 6b eb d8 00 00 00 08 00 00
> [56403.833036] critical medium error, dev sdc, sector 8966106072 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56403.843247] ata25: EH complete
> [56406.373840] ata25.00: exception Emask 0x0 SAct 0x7f81fe6b SErr 0x0 action 0x0
> [56406.380972] ata25.00: irq_stat 0x40000008
> [56406.385004] ata25.00: failed command: READ FPDMA QUEUED
> [56406.390247] ata25.00: cmd 60/08:48:e0:eb:6b/00:00:16:02:00/40 tag 9 ncq dma 4096 in
>                         res 43/40:08:e0:eb:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56406.406107] ata25.00: status: { DRDY SENSE ERR }
> [56406.410737] ata25.00: error: { UNC }
> [56406.507071] ata25.00: configured for UDMA/133
> [56406.507094] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56406.507096] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
> [56406.507098] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
> [56406.507100] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 16 6b eb e0 00 00 00 08 00 00
> [56406.507101] critical medium error, dev sdc, sector 8966106080 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56406.517282] ata25: EH complete
> [56408.531429] ata25.00: exception Emask 0x0 SAct 0xbfff87ff SErr 0x0 action 0x0
> [56408.538690] ata25.00: irq_stat 0x40000008
> [56408.542722] ata25.00: failed command: READ FPDMA QUEUED
> [56408.547952] ata25.00: cmd 60/08:78:f8:dd:6b/00:00:16:02:00/40 tag 15 ncq dma 4096 in
>                         res 43/40:08:f8:dd:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56408.563920] ata25.00: status: { DRDY SENSE ERR }
> [56408.568559] ata25.00: error: { UNC }
> [56408.664647] ata25.00: configured for UDMA/133
> [56408.664684] sd 24:0:0:0: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56408.664687] sd 24:0:0:0: [sdc] tag#15 Sense Key : Medium Error [current] 
> [56408.664689] sd 24:0:0:0: [sdc] tag#15 Add. Sense: Unrecovered read error
> [56408.664690] sd 24:0:0:0: [sdc] tag#15 CDB: Read(16) 88 00 00 00 00 02 16 6b dd f8 00 00 00 08 00 00
> [56408.664692] critical medium error, dev sdc, sector 8966102520 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56408.674879] ata25: EH complete
> [56410.899004] md/raid:md127: read error corrected (8 sectors at 8966071256 on dm-1)
> [56410.899015] md/raid:md127: read error corrected (8 sectors at 8966071264 on dm-1)
> [56412.014769] md/raid:md127: read error corrected (8 sectors at 8966067704 on dm-1)
> [56419.657721] ata25.00: exception Emask 0x0 SAct 0x1eff1f SErr 0x0 action 0x0
> [56419.664684] ata25.00: irq_stat 0x40000008
> [56419.668729] ata25.00: failed command: READ FPDMA QUEUED
> [56419.673959] ata25.00: cmd 60/00:88:00:15:6c/08:00:16:02:00/40 tag 17 ncq dma 1048576 in
>                         res 43/40:00:78:15:6c/00:08:16:02:00/00 Emask 0x408 (media error) <F>
> [56419.690151] ata25.00: status: { DRDY SENSE ERR }
> [56419.694787] ata25.00: error: { UNC }
> [56419.827430] ata25.00: configured for UDMA/133
> [56419.827504] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56419.827507] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
> [56419.827509] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
> [56419.827510] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 16 6c 15 00 00 00 08 00 00 00
> [56419.827511] critical medium error, dev sdc, sector 8966116608 op 0x0:(READ) flags 0x84700 phys_seg 29 prio class 0
> [56419.837858] ata25: EH complete
> [56421.908274] ata25.00: exception Emask 0x0 SAct 0x70e1847f SErr 0x0 action 0x0
> [56421.915411] ata25.00: irq_stat 0x40000008
> [56421.919444] ata25.00: failed command: READ FPDMA QUEUED
> [56421.924674] ata25.00: cmd 60/00:00:00:1d:6c/08:00:16:02:00/40 tag 0 ncq dma 1048576 in
>                         res 43/40:00:50:23:6c/00:08:16:02:00/00 Emask 0x408 (media error) <F>
> [56421.940790] ata25.00: status: { DRDY SENSE ERR }
> [56421.945421] ata25.00: error: { UNC }
> [56422.051628] ata25.00: configured for UDMA/133
> [56422.051706] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56422.051710] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
> [56422.051712] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
> [56422.051714] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 16 6c 1d 00 00 00 08 00 00 00
> [56422.051716] critical medium error, dev sdc, sector 8966118656 op 0x0:(READ) flags 0x84700 phys_seg 29 prio class 0
> [56422.062114] ata25: EH complete
> [56424.259349] ata25.00: exception Emask 0x0 SAct 0x6b1ffc08 SErr 0x0 action 0x0
> [56424.266486] ata25.00: irq_stat 0x40000008
> [56424.270517] ata25.00: failed command: READ FPDMA QUEUED
> [56424.275749] ata25.00: cmd 60/08:50:78:15:6c/00:00:16:02:00/40 tag 10 ncq dma 4096 in
>                         res 43/40:08:78:15:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56424.291690] ata25.00: status: { DRDY SENSE ERR }
> [56424.296324] ata25.00: error: { UNC }
> [56424.384160] ata25.00: configured for UDMA/133
> [56424.384177] sd 24:0:0:0: [sdc] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56424.384180] sd 24:0:0:0: [sdc] tag#10 Sense Key : Medium Error [current] 
> [56424.384182] sd 24:0:0:0: [sdc] tag#10 Add. Sense: Unrecovered read error
> [56424.384184] sd 24:0:0:0: [sdc] tag#10 CDB: Read(16) 88 00 00 00 00 02 16 6c 15 78 00 00 00 08 00 00
> [56424.384185] critical medium error, dev sdc, sector 8966116728 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56424.394369] ata25: EH complete
> [56424.599523] md/raid:md127: read error corrected (8 sectors at 8966081912 on dm-1)
> [56428.528870] ata25.00: exception Emask 0x0 SAct 0xff747eff SErr 0x0 action 0x0
> [56428.536009] ata25.00: irq_stat 0x40000008
> [56428.540060] ata25.00: failed command: READ FPDMA QUEUED
> [56428.545308] ata25.00: cmd 60/08:48:50:23:6c/00:00:16:02:00/40 tag 9 ncq dma 4096 in
>                         res 43/40:08:50:23:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56428.561598] ata25.00: status: { DRDY SENSE ERR }
> [56428.566242] ata25.00: error: { UNC }
> [56428.690982] ata25.00: configured for UDMA/133
> [56428.691017] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [56428.691020] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
> [56428.691022] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
> [56428.691023] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 16 6c 23 50 00 00 00 08 00 00
> [56428.691025] critical medium error, dev sdc, sector 8966120272 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56428.701217] ata25: EH complete
> [56430.789563] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [56430.796699] ata25.00: irq_stat 0x40000008
> [56430.800724] ata25.00: failed command: READ FPDMA QUEUED
> [56430.805977] ata25.00: cmd 60/08:a0:58:23:6c/00:00:16:02:00/40 tag 20 ncq dma 4096 in
>                         res 43/40:08:58:23:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
> [56430.822085] ata25.00: status: { DRDY SENSE ERR }
> [56430.826719] ata25.00: error: { UNC }
> [56430.923535] ata25.00: configured for UDMA/133
> [56430.923604] sd 24:0:0:0: [sdc] tag#20 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [56430.923607] sd 24:0:0:0: [sdc] tag#20 Sense Key : Medium Error [current] 
> [56430.923608] sd 24:0:0:0: [sdc] tag#20 Add. Sense: Unrecovered read error
> [56430.923610] sd 24:0:0:0: [sdc] tag#20 CDB: Read(16) 88 00 00 00 00 02 16 6c 23 58 00 00 00 08 00 00
> [56430.923611] critical medium error, dev sdc, sector 8966120280 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [56430.933789] ata25: EH complete
> [56431.464686] md/raid:md127: read error corrected (8 sectors at 8966085464 on dm-1)
> [56431.464684] md/raid:md127: read error corrected (8 sectors at 8966085456 on dm-1)
> [57881.912653] ata25.00: exception Emask 0x0 SAct 0x843ff807 SErr 0x0 action 0x0
> [57881.919808] ata25.00: irq_stat 0x40000008
> [57881.923835] ata25.00: failed command: READ FPDMA QUEUED
> [57881.929074] ata25.00: cmd 60/48:58:00:65:4c/05:00:1d:02:00/40 tag 11 ncq dma 692224 in
>                         res 43/40:48:68:66:4c/00:05:1d:02:00/00 Emask 0x408 (media error) <F>
> [57881.945260] ata25.00: status: { DRDY SENSE ERR }
> [57881.949953] ata25.00: error: { UNC }
> [57882.041190] ata25.00: configured for UDMA/133
> [57882.041253] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=3s
> [57882.041256] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
> [57882.041258] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Data synchronization mark error
> [57882.041260] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 1d 4c 65 00 00 00 05 48 00 00
> [57882.041261] I/O error, dev sdc, sector 9081480448 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
> [57882.050685] ata25: EH complete
> [57885.724654] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [57885.731866] ata25.00: irq_stat 0x40000008
> [57885.735894] ata25.00: failed command: READ FPDMA QUEUED
> [57885.741217] ata25.00: cmd 60/08:20:68:66:4c/00:00:1d:02:00/40 tag 4 ncq dma 4096 in
>                         res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57885.757076] ata25.00: status: { DRDY SENSE ERR }
> [57885.761721] ata25.00: error: { UNC }
> [57885.898142] ata25.00: configured for UDMA/133
> [57885.898197] ata25: EH complete
> [57888.362234] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [57888.369399] ata25.00: irq_stat 0x40000008
> [57888.373430] ata25.00: failed command: READ FPDMA QUEUED
> [57888.378665] ata25.00: cmd 60/08:60:68:66:4c/00:00:1d:02:00/40 tag 12 ncq dma 4096 in
>                         res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57888.394601] ata25.00: status: { DRDY SENSE ERR }
> [57888.399243] ata25.00: error: { UNC }
> [57888.488931] ata25.00: configured for UDMA/133
> [57888.488992] ata25: EH complete
> [57891.013850] ata25.00: exception Emask 0x0 SAct 0xbffc00b6 SErr 0x0 action 0x0
> [57891.020998] ata25.00: irq_stat 0x40000008
> [57891.025037] ata25.00: failed command: READ FPDMA QUEUED
> [57891.030565] ata25.00: cmd 60/08:90:68:66:4c/00:00:1d:02:00/40 tag 18 ncq dma 4096 in
>                         res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57891.046568] ata25.00: status: { DRDY SENSE ERR }
> [57891.051257] ata25.00: error: { UNC }
> [57891.154641] ata25.00: configured for UDMA/133
> [57891.154667] ata25: EH complete
> [57893.666329] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [57893.673470] ata25.00: irq_stat 0x40000008
> [57893.677495] ata25.00: failed command: READ FPDMA QUEUED
> [57893.682722] ata25.00: cmd 60/08:48:68:66:4c/00:00:1d:02:00/40 tag 9 ncq dma 4096 in
>                         res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57893.698570] ata25.00: status: { DRDY SENSE ERR }
> [57893.703198] ata25.00: error: { UNC }
> [57893.795404] ata25.00: configured for UDMA/133
> [57893.795458] ata25: EH complete
> [57896.285432] ata25.00: exception Emask 0x0 SAct 0x3ffc0020 SErr 0x0 action 0x0
> [57896.292651] ata25.00: irq_stat 0x40000008
> [57896.296685] ata25.00: failed command: READ FPDMA QUEUED
> [57896.301920] ata25.00: cmd 60/08:90:68:66:4c/00:00:1d:02:00/40 tag 18 ncq dma 4096 in
>                         res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57896.317965] ata25.00: status: { DRDY SENSE ERR }
> [57896.322722] ata25.00: error: { UNC }
> [57896.419495] ata25.00: configured for UDMA/133
> [57896.419516] ata25: EH complete
> [57898.960626] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
> [57898.967775] ata25.00: irq_stat 0x40000008
> [57898.971808] ata25.00: failed command: READ FPDMA QUEUED
> [57898.977066] ata25.00: cmd 60/08:70:68:66:4c/00:00:1d:02:00/40 tag 14 ncq dma 4096 in
>                         res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57898.993042] ata25.00: status: { DRDY SENSE ERR }
> [57898.997677] ata25.00: error: { UNC }
> [57899.085209] ata25.00: configured for UDMA/133
> [57899.085258] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
> [57899.085260] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
> [57899.085262] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Data synchronization mark error
> [57899.085264] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 1d 4c 66 68 00 00 00 08 00 00
> [57899.085265] I/O error, dev sdc, sector 9081480808 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [57899.094430] ata25: EH complete
> [57900.557539] md/raid:md127: read error corrected (8 sectors at 9081445992 on dm-1)
> [57903.266393] ata25.00: exception Emask 0x0 SAct 0x7fe0c3ff SErr 0x0 action 0x0
> [57903.273531] ata25.00: irq_stat 0x40000008
> [57903.277559] ata25.00: failed command: READ FPDMA QUEUED
> [57903.282790] ata25.00: cmd 60/10:a8:00:70:4c/05:00:1d:02:00/40 tag 21 ncq dma 663552 in
>                         res 43/40:10:40:74:4c/00:05:1d:02:00/00 Emask 0x408 (media error) <F>
> [57903.298899] ata25.00: status: { DRDY SENSE ERR }
> [57903.303523] ata25.00: error: { UNC }
> [57903.433667] ata25.00: configured for UDMA/133
> [57903.433735] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
> [57903.433738] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
> [57903.433740] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Data synchronization mark error
> [57903.433742] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 1d 4c 70 00 00 00 05 10 00 00
> [57903.433743] I/O error, dev sdc, sector 9081483264 op 0x0:(READ) flags 0x80700 phys_seg 162 prio class 0
> [57903.443152] ata25: EH complete
> [57906.791480] ata25.00: exception Emask 0x0 SAct 0x7f0fffff SErr 0x0 action 0x0
> [57906.798705] ata25.00: irq_stat 0x40000008
> [57906.802742] ata25.00: failed command: READ FPDMA QUEUED
> [57906.807980] ata25.00: cmd 60/08:c0:40:74:4c/00:00:1d:02:00/40 tag 24 ncq dma 4096 in
>                         res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57906.823914] ata25.00: status: { DRDY SENSE ERR }
> [57906.828540] ata25.00: error: { UNC }
> [57906.990783] ata25.00: configured for UDMA/133
> [57906.990830] ata25: EH complete
> [57909.373378] ata25.00: exception Emask 0x0 SAct 0x7800000f SErr 0x0 action 0x0
> [57909.380521] ata25.00: irq_stat 0x40000008
> [57909.384548] ata25.00: failed command: READ FPDMA QUEUED
> [57909.389785] ata25.00: cmd 60/08:e0:40:74:4c/00:00:1d:02:00/40 tag 28 ncq dma 4096 in
>                         res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57909.405722] ata25.00: status: { DRDY SENSE ERR }
> [57909.410352] ata25.00: error: { UNC }
> [57909.548237] ata25.00: configured for UDMA/133
> [57909.548284] ata25: EH complete
> [57911.907322] ata25.00: exception Emask 0x0 SAct 0x7003feff SErr 0x0 action 0x0
> [57911.914465] ata25.00: irq_stat 0x40000008
> [57911.918490] ata25.00: failed command: READ FPDMA QUEUED
> [57911.923726] ata25.00: cmd 60/08:e8:40:74:4c/00:00:1d:02:00/40 tag 29 ncq dma 4096 in
>                         res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57911.939665] ata25.00: status: { DRDY SENSE ERR }
> [57911.944297] ata25.00: error: { UNC }
> [57912.065349] ata25.00: configured for UDMA/133
> [57912.065458] ata25: EH complete
> [57914.457377] ata25.00: exception Emask 0x0 SAct 0xfff07fff SErr 0x0 action 0x0
> [57914.464516] ata25.00: irq_stat 0x40000008
> [57914.468545] ata25.00: failed command: READ FPDMA QUEUED
> [57914.473780] ata25.00: cmd 60/08:50:40:74:4c/00:00:1d:02:00/40 tag 10 ncq dma 4096 in
>                         res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57914.489711] ata25.00: status: { DRDY SENSE ERR }
> [57914.494339] ata25.00: error: { UNC }
> [57914.614467] ata25.00: configured for UDMA/133
> [57914.614561] ata25: EH complete
> [57917.090724] ata25.00: exception Emask 0x0 SAct 0x7ffc3fff SErr 0x0 action 0x0
> [57917.097865] ata25.00: irq_stat 0x40000008
> [57917.101902] ata25.00: failed command: READ FPDMA QUEUED
> [57917.107138] ata25.00: cmd 60/08:e0:40:74:4c/00:00:1d:02:00/40 tag 28 ncq dma 4096 in
>                         res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57917.123080] ata25.00: status: { DRDY SENSE ERR }
> [57917.127815] ata25.00: error: { UNC }
> [57917.246905] ata25.00: configured for UDMA/133
> [57917.247000] ata25: EH complete
> [57919.617397] ata25.00: exception Emask 0x0 SAct 0xfe7fc7ff SErr 0x0 action 0x0
> [57919.624542] ata25.00: irq_stat 0x40000008
> [57919.628570] ata25.00: failed command: READ FPDMA QUEUED
> [57919.633802] ata25.00: cmd 60/08:40:40:74:4c/00:00:1d:02:00/40 tag 8 ncq dma 4096 in
>                         res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
> [57919.649663] ata25.00: status: { DRDY SENSE ERR }
> [57919.654296] ata25.00: error: { UNC }
> [57919.769685] ata25.00: configured for UDMA/133
> [57919.769797] sd 24:0:0:0: [sdc] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
> [57919.769800] sd 24:0:0:0: [sdc] tag#8 Sense Key : Medium Error [current] 
> [57919.769803] sd 24:0:0:0: [sdc] tag#8 Add. Sense: Data synchronization mark error
> [57919.769805] sd 24:0:0:0: [sdc] tag#8 CDB: Read(16) 88 00 00 00 00 02 1d 4c 74 40 00 00 00 08 00 00
> [57919.769807] I/O error, dev sdc, sector 9081484352 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
> [57919.778997] ata25: EH complete
> [57919.882467] md/raid:md127: read error corrected (8 sectors at 9081449536 on dm-1)
> [64266.546763] INFO: task btrfs-transacti:8202 blocked for more than 122 seconds.
> [64266.553999]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.560013] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.567852] task:btrfs-transacti state:D stack:0     pid:8202  tgid:8202  ppid:2      flags:0x00004000
> [64266.567855] Call Trace:
> [64266.567856]  <TASK>
> [64266.567858]  __schedule+0x3d5/0x1520
> [64266.567865]  schedule+0x27/0xf0
> [64266.567867]  wait_for_commit+0x11f/0x1d0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.567898]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.567902]  btrfs_commit_transaction+0xbb6/0xc80 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.567924]  transaction_kthread+0x159/0x1c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.567943]  ? __pfx_transaction_kthread+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.567958]  kthread+0xcf/0x100
> [64266.567960]  ? __pfx_kthread+0x10/0x10
> [64266.567962]  ret_from_fork+0x31/0x50
> [64266.567964]  ? __pfx_kthread+0x10/0x10
> [64266.567966]  ret_from_fork_asm+0x1a/0x30
> [64266.567969]  </TASK>
> [64266.567998] INFO: task kworker/u64:57:96535 blocked for more than 122 seconds.
> [64266.575240]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.581256] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.589077] task:kworker/u64:57  state:D stack:0     pid:96535 tgid:96535 ppid:2      flags:0x00004000
> [64266.589080] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.589084] Call Trace:
> [64266.589085]  <TASK>
> [64266.589086]  __schedule+0x3d5/0x1520
> [64266.589091]  schedule+0x27/0xf0
> [64266.589093]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.589100]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.589102]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.589107]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.589109]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589133]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.589139]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.589146]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.589147]  __submit_bio+0x168/0x240
> [64266.589151]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.589153]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589176]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.589179]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589198]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589218]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589235]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.589237]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589254]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589271]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589293]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589310]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.589325]  do_writepages+0x7e/0x270
> [64266.589329]  __writeback_single_inode+0x41/0x340
> [64266.589331]  ? wbc_detach_inode+0x116/0x240
> [64266.589333]  writeback_sb_inodes+0x21c/0x4f0
> [64266.589341]  __writeback_inodes_wb+0x4c/0xf0
> [64266.589343]  wb_writeback+0x193/0x310
> [64266.589347]  wb_workfn+0x2a5/0x440
> [64266.589350]  process_one_work+0x17b/0x330
> [64266.589352]  worker_thread+0x2e2/0x410
> [64266.589354]  ? __pfx_worker_thread+0x10/0x10
> [64266.589355]  kthread+0xcf/0x100
> [64266.589357]  ? __pfx_kthread+0x10/0x10
> [64266.589359]  ret_from_fork+0x31/0x50
> [64266.589361]  ? __pfx_kthread+0x10/0x10
> [64266.589362]  ret_from_fork_asm+0x1a/0x30
> [64266.589366]  </TASK>
> [64266.589367] INFO: task kworker/u64:80:96615 blocked for more than 122 seconds.
> [64266.596592]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.602597] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.610419] task:kworker/u64:80  state:D stack:0     pid:96615 tgid:96615 ppid:2      flags:0x00004000
> [64266.610422] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.610424] Call Trace:
> [64266.610425]  <TASK>
> [64266.610426]  __schedule+0x3d5/0x1520
> [64266.610430]  schedule+0x27/0xf0
> [64266.610432]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.610436]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.610439]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.610443]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.610444]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610461]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.610465]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.610469]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.610471]  __submit_bio+0x168/0x240
> [64266.610473]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.610476]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610495]  ? __extent_writepage_io+0x21f/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610511]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610529]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610545]  extent_write_cache_pages+0x397/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610565]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610581]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.610597]  do_writepages+0x7e/0x270
> [64266.610600]  __writeback_single_inode+0x41/0x340
> [64266.610601]  ? wbc_detach_inode+0x116/0x240
> [64266.610604]  writeback_sb_inodes+0x21c/0x4f0
> [64266.610611]  __writeback_inodes_wb+0x4c/0xf0
> [64266.610614]  wb_writeback+0x193/0x310
> [64266.610617]  wb_workfn+0xc4/0x440
> [64266.610620]  process_one_work+0x17b/0x330
> [64266.610622]  worker_thread+0x2e2/0x410
> [64266.610624]  ? __pfx_worker_thread+0x10/0x10
> [64266.610625]  kthread+0xcf/0x100
> [64266.610627]  ? __pfx_kthread+0x10/0x10
> [64266.610629]  ret_from_fork+0x31/0x50
> [64266.610630]  ? __pfx_kthread+0x10/0x10
> [64266.610632]  ret_from_fork_asm+0x1a/0x30
> [64266.610635]  </TASK>
> [64266.610636] INFO: task kworker/u64:102:96624 blocked for more than 122 seconds.
> [64266.617953]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.623962] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.631784] task:kworker/u64:102 state:D stack:0     pid:96624 tgid:96624 ppid:2      flags:0x00004000
> [64266.631786] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.631789] Call Trace:
> [64266.631790]  <TASK>
> [64266.631791]  __schedule+0x3d5/0x1520
> [64266.631795]  schedule+0x27/0xf0
> [64266.631797]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.631800]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.631803]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.631807]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.631808]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631824]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.631828]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.631832]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.631834]  __submit_bio+0x168/0x240
> [64266.631836]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.631839]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631857]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.631860]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631877]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631893]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631908]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.631910]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631926]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631942]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631962]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631978]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.631993]  do_writepages+0x7e/0x270
> [64266.631996]  __writeback_single_inode+0x41/0x340
> [64266.631998]  ? wbc_detach_inode+0x116/0x240
> [64266.632000]  writeback_sb_inodes+0x21c/0x4f0
> [64266.632007]  __writeback_inodes_wb+0x4c/0xf0
> [64266.632010]  wb_writeback+0x193/0x310
> [64266.632013]  wb_workfn+0x2a5/0x440
> [64266.632016]  process_one_work+0x17b/0x330
> [64266.632018]  worker_thread+0x2e2/0x410
> [64266.632020]  ? __pfx_worker_thread+0x10/0x10
> [64266.632021]  kthread+0xcf/0x100
> [64266.632023]  ? __pfx_kthread+0x10/0x10
> [64266.632025]  ret_from_fork+0x31/0x50
> [64266.632026]  ? __pfx_kthread+0x10/0x10
> [64266.632028]  ret_from_fork_asm+0x1a/0x30
> [64266.632031]  </TASK>
> [64266.632032] INFO: task kworker/u64:25:100536 blocked for more than 122 seconds.
> [64266.639341]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.645348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.653171] task:kworker/u64:25  state:D stack:0     pid:100536 tgid:100536 ppid:2      flags:0x00004000
> [64266.653173] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.653175] Call Trace:
> [64266.653176]  <TASK>
> [64266.653177]  __schedule+0x3d5/0x1520
> [64266.653181]  schedule+0x27/0xf0
> [64266.653183]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.653187]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.653189]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.653193]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.653194]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653210]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.653214]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.653218]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.653220]  __submit_bio+0x168/0x240
> [64266.653222]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.653225]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653244]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.653246]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653264]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653279]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653295]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.653297]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653312]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653328]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653348]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653364]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.653381]  do_writepages+0x7e/0x270
> [64266.653383]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.653385]  ? select_task_rq_fair+0x7f8/0x1da0
> [64266.653388]  __writeback_single_inode+0x41/0x340
> [64266.653390]  ? wbc_detach_inode+0x116/0x240
> [64266.653392]  writeback_sb_inodes+0x21c/0x4f0
> [64266.653399]  __writeback_inodes_wb+0x4c/0xf0
> [64266.653402]  wb_writeback+0x193/0x310
> [64266.653405]  wb_workfn+0xc4/0x440
> [64266.653408]  process_one_work+0x17b/0x330
> [64266.653410]  worker_thread+0x2e2/0x410
> [64266.653412]  ? __pfx_worker_thread+0x10/0x10
> [64266.653413]  kthread+0xcf/0x100
> [64266.653415]  ? __pfx_kthread+0x10/0x10
> [64266.653417]  ret_from_fork+0x31/0x50
> [64266.653418]  ? __pfx_kthread+0x10/0x10
> [64266.653420]  ret_from_fork_asm+0x1a/0x30
> [64266.653423]  </TASK>
> [64266.653424] INFO: task kworker/u64:52:101215 blocked for more than 122 seconds.
> [64266.660726]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.666738] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.674556] task:kworker/u64:52  state:D stack:0     pid:101215 tgid:101215 ppid:2      flags:0x00004000
> [64266.674558] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.674560] Call Trace:
> [64266.674561]  <TASK>
> [64266.674562]  __schedule+0x3d5/0x1520
> [64266.674566]  schedule+0x27/0xf0
> [64266.674568]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.674572]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.674575]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.674579]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.674580]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674596]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.674599]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.674604]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.674605]  __submit_bio+0x168/0x240
> [64266.674608]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.674610]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674629]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.674631]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674649]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674665]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674680]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.674682]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674697]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674713]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674733]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674748]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.674764]  do_writepages+0x7e/0x270
> [64266.674767]  __writeback_single_inode+0x41/0x340
> [64266.674769]  ? wbc_detach_inode+0x116/0x240
> [64266.674771]  writeback_sb_inodes+0x21c/0x4f0
> [64266.674778]  __writeback_inodes_wb+0x4c/0xf0
> [64266.674780]  wb_writeback+0x193/0x310
> [64266.674784]  wb_workfn+0x2a5/0x440
> [64266.674787]  process_one_work+0x17b/0x330
> [64266.674789]  worker_thread+0x2e2/0x410
> [64266.674791]  ? __pfx_worker_thread+0x10/0x10
> [64266.674792]  kthread+0xcf/0x100
> [64266.674794]  ? __pfx_kthread+0x10/0x10
> [64266.674795]  ret_from_fork+0x31/0x50
> [64266.674797]  ? __pfx_kthread+0x10/0x10
> [64266.674799]  ret_from_fork_asm+0x1a/0x30
> [64266.674802]  </TASK>
> [64266.674803] INFO: task kworker/u64:71:101293 blocked for more than 123 seconds.
> [64266.682113]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.688120] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.695943] task:kworker/u64:71  state:D stack:0     pid:101293 tgid:101293 ppid:2      flags:0x00004000
> [64266.695945] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.695948] Call Trace:
> [64266.695949]  <TASK>
> [64266.695950]  __schedule+0x3d5/0x1520
> [64266.695954]  schedule+0x27/0xf0
> [64266.695956]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.695960]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.695962]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.695966]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.695968]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.695969]  ? blk_cgroup_bio_start+0x8c/0xd0
> [64266.695973]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.695978]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.695979]  __submit_bio+0x168/0x240
> [64266.695982]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.695985]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696003]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.696005]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696023]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696038]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696054]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.696056]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696071]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696087]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696107]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696122]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.696138]  do_writepages+0x7e/0x270
> [64266.696141]  __writeback_single_inode+0x41/0x340
> [64266.696143]  ? wbc_detach_inode+0x116/0x240
> [64266.696145]  writeback_sb_inodes+0x21c/0x4f0
> [64266.696152]  __writeback_inodes_wb+0x4c/0xf0
> [64266.696155]  wb_writeback+0x193/0x310
> [64266.696158]  wb_workfn+0x34b/0x440
> [64266.696161]  process_one_work+0x17b/0x330
> [64266.696163]  worker_thread+0x2e2/0x410
> [64266.696165]  ? __pfx_worker_thread+0x10/0x10
> [64266.696166]  kthread+0xcf/0x100
> [64266.696168]  ? __pfx_kthread+0x10/0x10
> [64266.696169]  ret_from_fork+0x31/0x50
> [64266.696171]  ? __pfx_kthread+0x10/0x10
> [64266.696173]  ret_from_fork_asm+0x1a/0x30
> [64266.696176]  </TASK>
> [64266.696177] INFO: task kworker/u64:83:101621 blocked for more than 123 seconds.
> [64266.703487]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.709494] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.717308] task:kworker/u64:83  state:D stack:0     pid:101621 tgid:101621 ppid:2      flags:0x00004000
> [64266.717310] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.717313] Call Trace:
> [64266.717313]  <TASK>
> [64266.717315]  __schedule+0x3d5/0x1520
> [64266.717319]  schedule+0x27/0xf0
> [64266.717320]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.717324]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.717327]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.717331]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.717332]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717348]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.717351]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.717356]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.717357]  __submit_bio+0x168/0x240
> [64266.717360]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.717363]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717381]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.717383]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717401]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717416]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717432]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.717433]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717449]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717465]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717485]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717501]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.717516]  do_writepages+0x7e/0x270
> [64266.717518]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.717519]  ? autoremove_wake_function+0x15/0x60
> [64266.717522]  __writeback_single_inode+0x41/0x340
> [64266.717523]  ? wbc_detach_inode+0x116/0x240
> [64266.717526]  writeback_sb_inodes+0x21c/0x4f0
> [64266.717533]  __writeback_inodes_wb+0x4c/0xf0
> [64266.717535]  wb_writeback+0x193/0x310
> [64266.717539]  wb_workfn+0x2a5/0x440
> [64266.717542]  process_one_work+0x17b/0x330
> [64266.717544]  worker_thread+0x2e2/0x410
> [64266.717546]  ? __pfx_worker_thread+0x10/0x10
> [64266.717547]  kthread+0xcf/0x100
> [64266.717548]  ? __pfx_kthread+0x10/0x10
> [64266.717550]  ret_from_fork+0x31/0x50
> [64266.717552]  ? __pfx_kthread+0x10/0x10
> [64266.717554]  ret_from_fork_asm+0x1a/0x30
> [64266.717557]  </TASK>
> [64266.717558] INFO: task kworker/u64:88:101623 blocked for more than 123 seconds.
> [64266.724865]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.730873] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.738691] task:kworker/u64:88  state:D stack:0     pid:101623 tgid:101623 ppid:2      flags:0x00004000
> [64266.738693] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.738695] Call Trace:
> [64266.738696]  <TASK>
> [64266.738697]  __schedule+0x3d5/0x1520
> [64266.738701]  schedule+0x27/0xf0
> [64266.738703]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.738707]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.738709]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.738714]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.738715]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738731]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.738734]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.738739]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.738740]  __submit_bio+0x168/0x240
> [64266.738743]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.738745]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738764]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.738766]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738788]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738804]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738819]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.738821]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738836]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738852]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738872]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738887]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.738902]  do_writepages+0x7e/0x270
> [64266.738905]  __writeback_single_inode+0x41/0x340
> [64266.738907]  ? wbc_detach_inode+0x116/0x240
> [64266.738909]  writeback_sb_inodes+0x21c/0x4f0
> [64266.738917]  __writeback_inodes_wb+0x4c/0xf0
> [64266.738919]  wb_writeback+0x193/0x310
> [64266.738922]  wb_workfn+0x2a5/0x440
> [64266.738926]  process_one_work+0x17b/0x330
> [64266.738928]  worker_thread+0x2e2/0x410
> [64266.738929]  ? __pfx_worker_thread+0x10/0x10
> [64266.738931]  kthread+0xcf/0x100
> [64266.738932]  ? __pfx_kthread+0x10/0x10
> [64266.738934]  ret_from_fork+0x31/0x50
> [64266.738936]  ? __pfx_kthread+0x10/0x10
> [64266.738937]  ret_from_fork_asm+0x1a/0x30
> [64266.738941]  </TASK>
> [64266.738942] INFO: task kworker/u64:108:102161 blocked for more than 123 seconds.
> [64266.746331]       Tainted: P           OE      6.10.5-arch1-1 #1
> [64266.752336] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [64266.760152] task:kworker/u64:108 state:D stack:0     pid:102161 tgid:102161 ppid:2      flags:0x00004000
> [64266.760154] Workqueue: writeback wb_workfn (flush-btrfs-1)
> [64266.760156] Call Trace:
> [64266.760157]  <TASK>
> [64266.760158]  __schedule+0x3d5/0x1520
> [64266.760162]  schedule+0x27/0xf0
> [64266.760164]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.760168]  ? __pfx_autoremove_wake_function+0x10/0x10
> [64266.760170]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
> [64266.760175]  ? __pfx_woken_wake_function+0x10/0x10
> [64266.760176]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.760177]  ? blk_cgroup_bio_start+0x8c/0xd0
> [64266.760181]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
> [64266.760185]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.760187]  __submit_bio+0x168/0x240
> [64266.760189]  submit_bio_noacct_nocheck+0x197/0x3e0
> [64266.760192]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760210]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.760213]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760230]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760246]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760261]  ? folio_clear_dirty_for_io+0x121/0x190
> [64266.760263]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760278]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760294]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760314]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760330]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
> [64266.760345]  do_writepages+0x7e/0x270
> [64266.760346]  ? srso_alias_return_thunk+0x5/0xfbef5
> [64266.760349]  __writeback_single_inode+0x41/0x340
> [64266.760351]  ? wbc_detach_inode+0x116/0x240
> [64266.760353]  writeback_sb_inodes+0x21c/0x4f0
> [64266.760360]  __writeback_inodes_wb+0x4c/0xf0
> [64266.760363]  wb_writeback+0x193/0x310
> [64266.760366]  wb_workfn+0x2a5/0x440
> [64266.760369]  process_one_work+0x17b/0x330
> [64266.760371]  worker_thread+0x2e2/0x410
> [64266.760373]  ? __pfx_worker_thread+0x10/0x10
> [64266.760374]  kthread+0xcf/0x100
> [64266.760376]  ? __pfx_kthread+0x10/0x10
> [64266.760378]  ret_from_fork+0x31/0x50
> [64266.760380]  ? __pfx_kthread+0x10/0x10
> [64266.760381]  ret_from_fork_asm+0x1a/0x30
> [64266.760385]  </TASK>
> [64266.760385] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
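[Editor's note, not part of the original report: the messages above reference the hung-task sysctls directly. A minimal sketch of the standard knobs for capturing more of these traces on a reproducer — assumes root, CONFIG_DETECT_HUNG_TASK, and magic sysrq enabled; values shown are illustrative.]

```shell
# Report hung tasks indefinitely instead of suppressing after the default budget
sysctl kernel.hung_task_warnings=-1

# Shorten the detection window from the default 120s to catch stalls sooner
sysctl kernel.hung_task_timeout_secs=60

# Dump stacks of all uninterruptible (D-state) tasks to dmesg on demand
echo w > /proc/sysrq-trigger
```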
> [85040.962654] systemd-coredump[137770]: Process 1259 (systemd-journal) of user 0 terminated abnormally with signal 6/ABRT, processing...
> sda           8:0    0  14.6T  0 disk  
> └─sda1        8:1    0  14.5T  0 part  
>   └─c07     254:3    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdb           8:16   0  14.6T  0 disk  
> └─sdb1        8:17   0  14.5T  0 part  
>   └─c09     254:4    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdc           8:32   0  14.6T  0 disk  
> └─sdc1        8:33   0  14.5T  0 part  
>   └─c05     254:1    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdd           8:48   0  14.6T  0 disk  
> └─sdd1        8:49   0  14.5T  0 part  
>   └─c10     254:9    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sde           8:64   0  14.6T  0 disk  
> └─sde1        8:65   0  14.5T  0 part  
>   └─c06     254:10   0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdf           8:80   0  14.6T  0 disk  
> └─sdf1        8:81   0  14.5T  0 part  
>   └─c08     254:8    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdg           8:96   0  14.6T  0 disk  
> └─sdg1        8:97   0  14.5T  0 part  
>   └─c03     254:7    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdh           8:112  0  14.6T  0 disk  
> └─sdh1        8:113  0  14.5T  0 part  
>   └─c04     254:6    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdi           8:128  0  14.6T  0 disk  
> └─sdi1        8:129  0  14.5T  0 part  
>   └─c01     254:2    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> sdj           8:144  0  14.6T  0 disk  
> └─sdj1        8:145  0  14.5T  0 part  
>   └─c02     254:5    0  14.5T  0 crypt 
>     └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
> [root@coldnas ~]# cat /proc/mdstat 
> Personalities : [raid6] [raid5] [raid4] 
> md127 : active raid6 dm-10[5] dm-9[9] dm-8[7] dm-7[2] dm-6[3] dm-5[1] dm-4[8] dm-3[6] dm-2[0] dm-1[4]
>       124972269568 blocks super 1.2 level 6, 4096k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
>       bitmap: 3/117 pages [12KB], 65536KB chunk
> 
> unused devices: <none>
> ps aux | grep "        D"
> root        8202  0.2  0.0      0     0 ?        D    Aug18   4:11 [btrfs-transaction]
> hasi        9117  0.4  0.0   4544  2628 ?        Ds   Aug18   6:33 /usr/lib/ssh/sftp-server
> hasi        9143  0.4  0.0   4572  2756 ?        Ds   Aug18   6:37 /usr/lib/ssh/sftp-server
> hasi        9180  1.1  0.0   4452  2796 ?        Ds   Aug18  16:56 /usr/lib/ssh/sftp-server
> hasi       30949  0.4  0.0   4488  2556 ?        Ds   Aug18   4:55 /usr/lib/ssh/sftp-server
> hasi       31971  0.3  0.0   4512  2628 ?        Ds   Aug18   4:30 /usr/lib/ssh/sftp-server
> hasi       32277  0.3  0.0   4592  2580 ?        Ds   Aug18   4:30 /usr/lib/ssh/sftp-server
> hasi       32379  0.3  0.0   4540  2692 ?        Ds   Aug18   3:58 /usr/lib/ssh/sftp-server
> hasi       32694  0.3  0.0   4536  2976 ?        Ds   Aug18   3:54 /usr/lib/ssh/sftp-server
> hasi       34038  0.2  0.0   4536  2736 ?        Ds   Aug18   2:59 /usr/lib/ssh/sftp-server
> hasi       35651  0.2  0.0   4568  2556 ?        Ds   01:18   1:56 /usr/lib/ssh/sftp-server
> hasi       35767  0.2  0.0   4604  2624 ?        Ds   01:21   2:03 /usr/lib/ssh/sftp-server
> hasi       35771  0.2  0.0   4464  2556 ?        Ds   01:21   2:07 /usr/lib/ssh/sftp-server
> hasi       35816  0.2  0.0   4512  2864 ?        Ds   01:22   2:07 /usr/lib/ssh/sftp-server
> hasi       36497  0.2  0.0   4636  2864 ?        Ds   01:37   2:01 /usr/lib/ssh/sftp-server
> hasi       36504  0.2  0.0   4548  2692 ?        Ds   01:37   2:06 /usr/lib/ssh/sftp-server
> hasi       36511  0.2  0.0   4540  2864 ?        Ds   01:37   1:48 /usr/lib/ssh/sftp-server
> hasi       43900  0.1  0.0   4456  2568 ?        Ds   02:47   1:36 /usr/lib/ssh/sftp-server
> hasi       55877  0.1  0.0   4476  2756 ?        Ds   03:22   1:17 /usr/lib/ssh/sftp-server
> hasi       55917  0.1  0.0   4516  2848 ?        Ds   03:24   1:16 /usr/lib/ssh/sftp-server
> hasi       56055  0.1  0.0   4516  2456 ?        Ds   03:27   1:21 /usr/lib/ssh/sftp-server
> hasi       56075  0.1  0.0   4380  2584 ?        Ds   03:28   1:26 /usr/lib/ssh/sftp-server
> hasi       56086  0.1  0.0   4512  2712 ?        Ds   03:28   1:28 /usr/lib/ssh/sftp-server
> hasi       56090  0.1  0.0   4556  2928 ?        Ds   03:28   1:28 /usr/lib/ssh/sftp-server
> hasi       56231  0.1  0.0   4508  2752 ?        Ds   03:29   1:15 /usr/lib/ssh/sftp-server
> hasi       57837  0.1  0.0   4408  2500 ?        Ds   03:33   1:15 /usr/lib/ssh/sftp-server
> hasi       57840  0.1  0.0   4460  2944 ?        Ds   03:33   1:12 /usr/lib/ssh/sftp-server
> hasi       85960  0.1  0.0   4552  2816 ?        Ds   05:00   1:03 /usr/lib/ssh/sftp-server
> hasi       85967  0.1  0.0   4508  2648 ?        Ds   05:00   0:41 /usr/lib/ssh/sftp-server
> hasi       85980  0.1  0.0   4520  2568 ?        Ds   05:00   0:55 /usr/lib/ssh/sftp-server
> hasi       85985  0.1  0.0   4496  2784 ?        Ds   05:00   0:53 /usr/lib/ssh/sftp-server
> root       96535  0.8  0.0      0     0 ?        D    05:35   5:46 [kworker/u64:57+flush-btrfs-1]
> root       96615  0.6  0.0      0     0 ?        D    05:35   4:33 [kworker/u64:80+flush-btrfs-1]
> root       96624  0.6  0.0      0     0 ?        D    05:35   4:31 [kworker/u64:102+flush-btrfs-1]
> root      100536  0.6  0.0      0     0 ?        D    05:53   4:25 [kworker/u64:25+flush-btrfs-1]
> root      101215  0.6  0.0      0     0 ?        D    05:57   3:56 [kworker/u64:52+flush-btrfs-1]
> root      101293  0.7  0.0      0     0 ?        D    05:57   4:53 [kworker/u64:71+flush-btrfs-1]
> root      101621  0.6  0.0      0     0 ?        D    05:59   3:56 [kworker/u64:83+flush-btrfs-1]
> root      101623  0.6  0.0      0     0 ?        D    05:59   3:57 [kworker/u64:88+flush-btrfs-1]
> root      102161  0.6  0.0      0     0 ?        D    06:01   3:55 [kworker/u64:108+flush-btrfs-1]
> root      103370  0.5  0.0      0     0 ?        D    06:08   3:43 [kworker/u64:144+flush-btrfs-1]
> root      108317  0.6  0.0      0     0 ?        D    06:30   3:48 [kworker/u64:24+flush-btrfs-1]
> root      109537  0.6  0.0      0     0 ?        D    06:37   3:40 [kworker/u64:4+flush-btrfs-1]
> root      109550  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:26+flush-btrfs-1]
> root      109553  0.6  0.0      0     0 ?        D    06:37   3:41 [kworker/u64:32+flush-btrfs-1]
> root      109555  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:34+flush-btrfs-1]
> root      109559  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:42+flush-btrfs-1]
> root      109564  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:48+flush-btrfs-1]
> root      115503  0.2  0.0      0     0 ?        D    07:37   1:26 [kworker/u64:6+flush-btrfs-1]
> root      118242  0.1  0.0      0     0 ?        D    08:08   0:47 [kworker/u64:68+flush-btrfs-1]
> root      119894  0.1  0.0      0     0 ?        D    08:22   0:36 [kworker/u64:14+flush-btrfs-1]
> root      120633  0.1  0.0      0     0 ?        D    08:28   0:43 [kworker/u64:5+flush-btrfs-1]
> root      120638  0.0  0.0      0     0 ?        D    08:28   0:20 [kworker/u64:23+flush-btrfs-1]
> root      120642  0.0  0.0      0     0 ?        D    08:28   0:28 [kworker/u64:35+flush-btrfs-1]
> root      122280  0.0  0.0      0     0 ?        D    08:36   0:20 [kworker/u64:0+flush-btrfs-1]
> root      122669  0.0  0.0      0     0 ?        D    08:39   0:11 [kworker/u64:79+flush-btrfs-1]
> root      123019  0.0  0.0      0     0 ?        D    08:42   0:27 [kworker/u64:85+blkcg_punt_bio]
> root      123028  0.0  0.0      0     0 ?        D    08:42   0:21 [kworker/u64:95+flush-btrfs-1]
> root      124094  0.0  0.0      0     0 ?        D    08:51   0:08 [kworker/u64:29+flush-btrfs-1]
> root      124101  0.0  0.0      0     0 ?        D    08:51   0:20 [kworker/u64:91+flush-btrfs-1]
> root      124177  0.0  0.0      0     0 ?        D    08:51   0:19 [kworker/u64:106+flush-btrfs-1]
> root      124418  0.0  0.0      0     0 ?        D    08:53   0:20 [kworker/u64:114+flush-btrfs-1]
> root      124423  0.0  0.0      0     0 ?        D    08:53   0:15 [kworker/u64:119+flush-btrfs-1]
> root      124428  0.0  0.0      0     0 ?        D    08:53   0:13 [kworker/u64:124+flush-btrfs-1]
> root      124628  0.0  0.0      0     0 ?        D    08:55   0:22 [kworker/u64:132+flush-btrfs-1]
> root      124629  0.0  0.0      0     0 ?        D    08:55   0:20 [kworker/u64:133+flush-btrfs-1]
> root      124630  0.0  0.0      0     0 ?        D    08:55   0:05 [kworker/u64:134+flush-btrfs-1]
> root      124633  0.0  0.0      0     0 ?        D    08:55   0:05 [kworker/u64:137+flush-btrfs-1]
> root      124636  0.0  0.0      0     0 ?        D    08:55   0:07 [kworker/u64:140+blkcg_punt_bio]
> root      124637  0.0  0.0      0     0 ?        D    08:55   0:23 [kworker/u64:141+flush-btrfs-1]
> work      125055  0.0  0.0 156020 22868 ?        Dl   08:59   0:15 smbd: client [192.168.67.33]
> root      126064  0.0  0.0      0     0 ?        D    09:07   0:18 [kworker/u64:28+flush-btrfs-1]
> hasi      127483  0.0  0.0   2604  1776 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
> hasi      127494  0.0  0.0   2604  1704 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
> hasi      127501  0.0  0.0   2604  1688 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
> hasi      127508  0.0  0.0   2604  1772 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
> hasi      127520  0.0  0.0   2604  1804 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
> hasi      127527  0.0  0.0   2604  1864 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
> hasi      127543  0.0  0.0   2604  1732 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
> hasi      127551  0.0  0.0   2604  1700 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
> hasi      127558  0.0  0.0   2604  1916 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
> hasi      127565  0.0  0.0   2604  1916 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
> hasi      127572  0.0  0.0   2604  1700 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
> hasi      127579  0.0  0.0   2604  1864 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
> hasi      127600  0.0  0.0   2604  1688 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
> hasi      127608  0.0  0.0   2604  1700 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
> hasi      127617  0.0  0.0   2604  1732 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
> hasi      127624  0.0  0.0   2604  1728 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
> hasi      127631  0.0  0.0   2604  1908 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
> hasi      127646  0.0  0.0   2604  1596 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
> hasi      127654  0.0  0.0   2604  2044 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
> hasi      127664  0.0  0.0   2604  1732 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
> hasi      127676  0.0  0.0   2604  1752 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
> hasi      127683  0.0  0.0   2604  1728 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
> hasi      127701  0.0  0.0   2604  1776 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
> hasi      127709  0.0  0.0   2604  1688 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
> hasi      127716  0.0  0.0   2604  1872 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
> hasi      127723  0.0  0.0   2604  1700 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
> hasi      127730  0.0  0.0   2736  1960 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
> hasi      127741  0.0  0.0   2604  1844 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
> root      127746  0.0  0.0      0     0 ?        D    09:39   0:00 [kworker/u64:1+flush-btrfs-1]
> hasi      127760  0.0  0.0   2604  1704 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
> hasi      127768  0.0  0.0   2604  1872 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
> hasi      127777  0.0  0.0   2604  1944 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
> hasi      127784  0.0  0.0   2604  1732 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
> hasi      127791  0.0  0.0   2604  1712 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
> hasi      127803  0.0  0.0   2604  1728 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
> hasi      127811  0.0  0.0   2604  1596 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
> hasi      127818  0.0  0.0   2604  1728 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
> hasi      127830  0.0  0.0   2604  1776 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
> hasi      127837  0.0  0.0   2604  1772 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
> hasi      127848  0.0  0.0   2604  1728 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
> hasi      127856  0.0  0.0   2604  1916 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
> hasi      127863  0.0  0.0   2604  1916 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
> hasi      127870  0.0  0.0   2604  1904 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
> hasi      127878  0.0  0.0   2604  1700 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
> hasi      127885  0.0  0.0   2604  1716 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
> hasi      127899  0.0  0.0   2604  1864 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
> root      127900  0.0  0.0      0     0 ?        D    09:46   0:00 [kworker/u64:2+flush-btrfs-1]
> hasi      127908  0.0  0.0   2604  1716 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
> hasi      127917  0.0  0.0   2604  2072 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
> hasi      127929  0.0  0.0   2604  1732 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
> hasi      127936  0.0  0.0   2604  2000 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
> hasi      127948  0.0  0.0   2604  1776 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
> hasi      127956  0.0  0.0   2604  1688 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
> root      127957  0.0  0.0      0     0 ?        D    09:47   0:00 [kworker/u64:3+flush-btrfs-1]
> root      127958  0.0  0.0      0     0 ?        D    09:47   0:00 [kworker/u64:7+flush-btrfs-1]
> hasi      127965  0.0  0.0   2604  1824 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
> hasi      127980  0.0  0.0   2604  1704 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
> hasi      127987  0.0  0.0   2604  2016 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
> hasi      128005  0.0  0.0   2604  1704 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
> hasi      128013  0.0  0.0   2604  1752 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
> hasi      128020  0.0  0.0   2604  1700 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
> hasi      128027  0.0  0.0   2604  1732 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
> hasi      128034  0.0  0.0   2604  1856 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
> hasi      128043  0.0  0.0   2604  1776 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
> hasi      128057  0.0  0.0   2604  1992 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
> root      128062  0.0  0.0      0     0 ?        D    09:51   0:00 [kworker/u64:8+flush-btrfs-1]
> root      128063  0.0  0.0      0     0 ?        D    09:51   0:00 [kworker/u64:9+flush-btrfs-1]
> hasi      128071  0.0  0.0   2604  1916 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
> hasi      128080  0.0  0.0   2604  1716 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
> hasi      128087  0.0  0.0   2604  1864 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
> hasi      128094  0.0  0.0   2604  1596 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
> hasi      128104  0.0  0.0   2604  1776 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
> hasi      128112  0.0  0.0   2604  1772 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
> hasi      128119  0.0  0.0   2604  1688 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
> hasi      128131  0.0  0.0   2604  1864 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
> hasi      128138  0.0  0.0   2604  1944 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
> hasi      128149  0.0  0.0   2604  1864 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
> hasi      128157  0.0  0.0   2604  1804 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
> hasi      128164  0.0  0.0   2604  1944 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
> hasi      128171  0.0  0.0   2604  1880 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
> hasi      128180  0.0  0.0   2604  1716 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
> hasi      128187  0.0  0.0   2604  1596 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
> hasi      128201  0.0  0.0   2604  1596 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
> hasi      128209  0.0  0.0   2740  1904 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
> hasi      128216  0.0  0.0   4408  3012 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
> root      128217  0.0  0.0      0     0 ?        D    09:56   0:00 [kworker/u64:10+flush-btrfs-1]
> hasi      128228  0.0  0.0   2604  1728 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
> hasi      128235  0.0  0.0   2604  2044 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
> hasi      128246  0.0  0.0   2604  1844 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
> root      128250  0.0  0.0      0     0 ?        D    09:57   0:00 [kworker/u64:11+flush-btrfs-1]
> hasi      128260  0.0  0.0   4408  2696 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
> hasi      128263  0.0  0.0   2604  1688 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
> hasi      128277  0.0  0.0   2604  1908 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
> root      128282  0.0  0.0      0     0 ?        D    09:58   0:00 [kworker/u64:12+flush-btrfs-1]
> hasi      128289  0.0  0.0   2604  1752 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
> hasi      128303  0.0  0.0   2604  1716 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
> hasi      128306  0.0  0.0   4408  2752 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
> hasi      128315  0.0  0.0   2604  1688 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
> hasi      128322  0.0  0.0   2604  1864 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
> hasi      128329  0.0  0.0   2604  2072 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
> hasi      128338  0.0  0.0   2604  1728 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
> root      128345  0.0  0.0      0     0 ?        D    10:00   0:00 [kworker/u64:13+flush-btrfs-1]
> hasi      128361  0.0  0.0   2604  2044 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
> root      128364  0.0  0.0      0     0 ?        D    10:01   0:00 [kworker/u64:15+flush-btrfs-1]
> hasi      128372  0.0  0.0   2604  1772 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
> hasi      128381  0.0  0.0   2604  1732 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
> hasi      128390  0.0  0.0   4408  2752 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
> hasi      128397  0.0  0.0   2604  1700 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
> hasi      128400  0.0  0.0   2604  1688 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
> hasi      128414  0.0  0.0   2604  1596 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
> hasi      128421  0.0  0.0   2604  1728 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
> hasi      128433  0.0  0.0   2604  1872 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
> hasi      128440  0.0  0.0   2604  1772 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
> hasi      128459  0.0  0.0   2604  1700 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
> hasi      128466  0.0  0.0   2604  2044 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
> hasi      128475  0.0  0.0   4456  2944 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
> hasi      128485  0.0  0.0   2604  1688 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
> hasi      128493  0.0  0.0   2604  1944 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
> hasi      128500  0.0  0.0   2604  1804 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
> root      128504  0.0  0.0      0     0 ?        D    10:05   0:00 [kworker/u64:16+flush-btrfs-1]
> hasi      128515  0.0  0.0   2604  1688 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
> root      128516  0.0  0.0      0     0 ?        D    10:06   0:00 [kworker/u64:17+flush-btrfs-1]
> hasi      128524  0.0  0.0   2604  1916 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
> hasi      128533  0.0  0.0   2604  1752 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
> hasi      128540  0.0  0.0   2604  1732 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
> hasi      128547  0.0  0.0   2604  2072 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
> hasi      128560  0.0  0.0   2604  1688 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
> hasi      128567  0.0  0.0   2604  1716 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
> root      128572  0.0  0.0      0     0 ?        D    10:07   0:00 [kworker/u64:18+flush-btrfs-1]
> hasi      128576  0.0  0.0   2604  1716 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
> hasi      128588  0.0  0.0   2604  2000 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
> hasi      128599  0.0  0.0   2604  1900 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
> root      128601  0.0  0.0      0     0 ?        D    10:08   0:00 [kworker/u64:19+flush-btrfs-1]
> hasi      128612  0.0  0.0   2604  1944 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
> hasi      128619  0.0  0.0   2604  1728 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
> hasi      128627  0.0  0.0   2604  1596 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
> hasi      128634  0.0  0.0   2604  1700 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
> hasi      128641  0.0  0.0   2604  1908 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
> hasi      128650  0.0  0.0   2604  2016 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
> root      128652  0.0  0.0      0     0 ?        D    10:09   0:00 [kworker/u64:20+flush-btrfs-1]
> root      128656  0.0  0.0      0     0 ?        D    10:10   0:00 [kworker/u64:21+flush-btrfs-1]
> hasi      128667  0.0  0.0   2604  1904 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
> root      128670  0.0  0.0      0     0 ?        D    10:11   0:00 [kworker/u64:22+flush-btrfs-1]
> hasi      128678  0.0  0.0   2604  1688 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
> hasi      128687  0.0  0.0   2604  1732 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
> hasi      128694  0.0  0.0   2604  1752 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
> hasi      128701  0.0  0.0   2604  1716 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
> hasi      128711  0.0  0.0   2604  1596 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
> hasi      128718  0.0  0.0   2604  1716 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
> root      128724  0.0  0.0      0     0 ?        D    10:12   0:00 [kworker/u64:27+flush-btrfs-1]
> hasi      128727  0.0  0.0   2604  1864 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
> hasi      128741  0.0  0.0   2604  1776 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
> hasi      128748  0.0  0.0   2604  1596 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
> root      128749  0.0  0.0      0     0 ?        D    10:13   0:00 [kworker/u64:30+flush-btrfs-1]
> hasi      128760  0.0  0.0   2604  1596 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
> hasi      128767  0.0  0.0   2604  1908 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
> hasi      128776  0.0  0.0   2604  1864 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
> hasi      128786  0.0  0.0   2604  1688 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
> hasi      128793  0.0  0.0   2604  1872 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
> hasi      128800  0.0  0.0   2604  1772 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
> hasi      128817  0.0  0.0   2604  1916 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
> hasi      128825  0.0  0.0   2604  2072 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
> hasi      128835  0.0  0.0   2604  2000 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
> hasi      128843  0.0  0.0   2604  1704 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
> hasi      128850  0.0  0.0   2604  1716 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
> hasi      128860  0.0  0.0   2780  1904 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
> hasi      128869  0.0  0.0   2604  1728 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
> root      128871  0.0  0.0      0     0 ?        D    10:17   0:00 [kworker/u64:31+flush-btrfs-1]
> hasi      128878  0.0  0.0   2604  1916 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
> hasi      128899  0.0  0.0   2604  1880 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
> hasi      128907  0.0  0.0   2604  1944 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
> hasi      128919  0.0  0.0   2604  1844 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
> root      128922  0.0  0.0      0     0 ?        D    10:19   0:00 [kworker/u64:33+flush-btrfs-1]
> hasi      128929  0.0  0.0   2604  1752 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
> hasi      128937  0.0  0.0   2604  1596 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
> hasi      128944  0.0  0.0   2604  1716 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
> hasi      128951  0.0  0.0   2604  1804 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
> hasi      128958  0.0  0.0   2604  2072 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
> root      128965  0.0  0.0      0     0 ?        D    10:20   0:00 [kworker/u64:36+flush-btrfs-1]
> hasi      128976  0.0  0.0   2604  1716 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
> root      128977  0.0  0.0      0     0 ?        D    10:21   0:00 [kworker/u64:37+flush-btrfs-1]
> hasi      128985  0.0  0.0   2604  1772 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
> hasi      128994  0.0  0.0   2604  1916 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
> hasi      129001  0.0  0.0   2604  1944 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
> hasi      129008  0.0  0.0   2604  1916 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
> hasi      129018  0.0  0.0   2604  1804 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
> hasi      129025  0.0  0.0   2604  1704 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
> hasi      129033  0.0  0.0   2604  1728 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
> hasi      129045  0.0  0.0   2604  1688 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
> hasi      129052  0.0  0.0   2604  1732 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
> hasi      129063  0.0  0.0   2604  1872 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
> hasi      129070  0.0  0.0   2604  1968 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
> root      129074  0.0  0.0      0     0 ?        D    10:24   0:00 [kworker/u64:38+flush-btrfs-1]
> hasi      129082  0.0  0.0   2604  1804 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
> hasi      129089  0.0  0.0   2604  1688 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
> hasi      129096  0.0  0.0   2604  1716 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
> hasi      129103  0.0  0.0   2604  1688 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
> hasi      129117  0.0  0.0   2604  1732 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
> hasi      129125  0.0  0.0   2604  2000 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
> hasi      129139  0.0  0.0   2604  1776 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
> work      129145  0.0  0.0  97816 22024 ?        Dl   10:26   0:00 smbd: client [192.168.67.33]
> root      129161  0.0  0.0      0     0 ?        D    10:27   0:00 [kworker/u64:39+flush-btrfs-1]
> hasi      129168  0.0  0.0   2604  1716 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
> hasi      129175  0.0  0.0   2604  1864 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
> hasi      129185  0.0  0.0   2604  1864 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
> hasi      129192  0.0  0.0   2604  1844 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
> hasi      129203  0.0  0.0   2604  1688 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
> hasi      129216  0.0  0.0   2604  1688 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
> hasi      129223  0.0  0.0   2604  1992 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
> hasi      129236  0.0  0.0   2604  2072 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
> hasi      129246  0.0  0.0   2604  1596 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
> hasi      129255  0.0  0.0   2604  1944 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
> hasi      129262  0.0  0.0   2604  1716 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
> root      129263  0.0  0.0      0     0 ?        D    10:29   0:00 [kworker/u64:40+flush-btrfs-1]
> hasi      129270  0.0  0.0   2604  1700 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
> hasi      129277  0.0  0.0   2604  1772 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
> hasi      129292  0.0  0.0   2604  1776 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
> hasi      129300  0.0  0.0   2604  1732 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
> hasi      129309  0.0  0.0   2604  1700 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
> hasi      129319  0.0  0.0   2604  1880 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
> hasi      129330  0.0  0.0   2604  1864 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
> hasi      129341  0.0  0.0   2604  1688 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
> hasi      129348  0.0  0.0   2604  1716 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
> hasi      129355  0.0  0.0   2604  1772 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
> root      129362  0.0  0.0      0     0 ?        D    10:33   0:00 [kworker/u64:41+flush-btrfs-1]
> hasi      129369  0.0  0.0   2604  1944 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
> hasi      129376  0.0  0.0   2604  1776 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
> hasi      129394  0.0  0.0   2604  1752 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
> hasi      129401  0.0  0.0   2604  1864 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
> hasi      129409  0.0  0.0   2604  1944 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
> hasi      129416  0.0  0.0   2604  1900 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
> hasi      129425  0.0  0.0   2604  1944 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
> hasi      129432  0.0  0.0   2604  1728 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
> hasi      129450  0.0  0.0   2604  1700 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
> hasi      129458  0.0  0.0   2604  1856 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
> hasi      129472  0.0  0.0   2604  1752 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
> hasi      129479  0.0  0.0   2604  1732 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
> root      129480  0.0  0.0      0     0 ?        D    10:37   0:00 [kworker/u64:43+flush-btrfs-1]
> hasi      129487  0.0  0.0   2604  1704 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
> hasi      129497  0.0  0.0   2604  1732 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
> hasi      129504  0.0  0.0   2604  1856 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
> hasi      129512  0.0  0.0   2604  1700 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
> root      129519  0.0  0.0      0     0 ?        D    10:38   0:00 [kworker/u64:44+flush-btrfs-1]
> hasi      129526  0.0  0.0   2604  2044 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
> hasi      129535  0.0  0.0   2604  1752 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
> root      129541  0.0  0.0      0     0 ?        D    10:38   0:00 [kworker/u64:45+flush-btrfs-1]
> hasi      129554  0.0  0.0   2736  1772 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
> root      129560  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:46+flush-btrfs-1]
> hasi      129567  0.0  0.0   2604  1916 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
> hasi      129575  0.0  0.0   2604  1872 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
> hasi      129582  0.0  0.0   2604  1688 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
> hasi      129589  0.0  0.0   2604  1972 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
> hasi      129597  0.0  0.0   2604  1596 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
> root      129598  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:47+flush-btrfs-1]
> root      129599  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:49+flush-btrfs-1]
> hasi      129613  0.0  0.0   2604  1916 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
> hasi      129622  0.0  0.0   2604  1728 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
> hasi      129631  0.0  0.0   2604  1944 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
> hasi      129638  0.0  0.0   2604  1960 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
> hasi      129647  0.0  0.0   2604  1916 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
> hasi      129657  0.0  0.0   2604  1776 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
> hasi      129664  0.0  0.0   2604  1728 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
> hasi      129671  0.0  0.0   2604  1716 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
> root      129679  0.0  0.0      0     0 ?        D    10:43   0:00 [kworker/u64:50+flush-btrfs-1]
> hasi      129686  0.0  0.0   2604  1596 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
> hasi      129693  0.0  0.0   2604  1596 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
> hasi      129704  0.0  0.0   2604  1688 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
> hasi      129711  0.0  0.0   2604  1864 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
> hasi      129719  0.0  0.0   2604  1700 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
> hasi      129726  0.0  0.0   2604  1944 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
> hasi      129733  0.0  0.0   2604  1620 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
> hasi      129740  0.0  0.0   2604  1864 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
> hasi      129754  0.0  0.0   2604  1776 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
> hasi      129763  0.0  0.0   2604  2000 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
> hasi      129776  0.0  0.0   2604  2016 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
> hasi      129784  0.0  0.0   2604  1696 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
> hasi      129791  0.0  0.0   2604  2000 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
> hasi      129803  0.0  0.0   2604  1716 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
> hasi      129810  0.0  0.0   2604  1700 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
> root      129811  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:51+flush-btrfs-1]
> root      129812  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:53+flush-btrfs-1]
> hasi      129819  0.0  0.0   2604  1688 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
> root      129823  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:54+flush-btrfs-1]
> hasi      129836  0.0  0.0   2604  1716 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
> hasi      129843  0.0  0.0   2604  1856 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
> hasi      129856  0.0  0.0   2604  1992 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
> root      129861  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:55+flush-btrfs-1]
> root      129862  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:56+flush-btrfs-1]
> root      129863  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:58+flush-btrfs-1]
> hasi      129870  0.0  0.0   2604  1688 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
> hasi      129878  0.0  0.0   2604  1804 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
> hasi      129885  0.0  0.0   2604  1872 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
> hasi      129892  0.0  0.0   2604  1904 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
> hasi      129902  0.0  0.0   2604  1688 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
> hasi      129917  0.0  0.0   2604  1596 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
> hasi      129925  0.0  0.0   2604  1772 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
> hasi      129934  0.0  0.0   2604  1944 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
> hasi      129941  0.0  0.0   2604  1776 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
> hasi      129949  0.0  0.0   2604  1716 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
> hasi      129958  0.0  0.0   2604  1596 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
> hasi      129965  0.0  0.0   2604  1728 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
> hasi      129972  0.0  0.0   2604  1752 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
> hasi      129985  0.0  0.0   2604  1772 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
> hasi      129992  0.0  0.0   2604  1864 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
> hasi      130004  0.0  0.0   2604  1864 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
> hasi      130011  0.0  0.0   2604  1844 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
> hasi      130020  0.0  0.0   2604  1732 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
> hasi      130027  0.0  0.0   2604  1596 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
> hasi      130034  0.0  0.0   2604  1752 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
> hasi      130041  0.0  0.0   2604  1772 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
> hasi      130055  0.0  0.0   2604  1700 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
> hasi      130063  0.0  0.0   2604  1992 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
> hasi      130079  0.0  0.0   2604  1916 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
> hasi      130086  0.0  0.0   2604  1596 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
> hasi      130097  0.0  0.0   2604  1688 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
> hasi      130105  0.0  0.0   2604  1732 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
> hasi      130109  0.0  0.0   2604  1752 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
> hasi      130114  0.0  0.0   2604  1916 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
> root      130127  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/u64:59+flush-btrfs-1]
> hasi      130138  0.0  0.0   2604  1716 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
> hasi      130145  0.0  0.0   2604  1944 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
> hasi      130156  0.0  0.0   2604  1900 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
> root      130161  0.0  0.0      0     0 ?        D    10:59   0:00 [kworker/u64:60+flush-btrfs-1]
> hasi      130168  0.0  0.0   2604  1872 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
> hasi      130176  0.0  0.0   2604  1916 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
> hasi      130183  0.0  0.0   2604  1688 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
> hasi      130190  0.0  0.0   2604  1832 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
> hasi      130199  0.0  0.0   2604  1704 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
> root      130206  0.0  0.0      0     0 ?        D    11:00   0:00 [kworker/u64:61+flush-btrfs-1]
> hasi      130222  0.0  0.0   2604  1776 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
> hasi      130232  0.0  0.0   2604  1728 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
> hasi      130239  0.0  0.0   2604  1772 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
> hasi      130246  0.0  0.0   2604  1752 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
> hasi      130254  0.0  0.0   2604  1716 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
> hasi      130263  0.0  0.0   2604  1864 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
> hasi      130270  0.0  0.0   2604  1872 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
> hasi      130279  0.0  0.0   2604  1700 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
> root      130281  0.0  0.0      0     0 ?        D    11:02   0:00 [kworker/u64:62+flush-btrfs-1]
> hasi      130291  0.0  0.0   2604  1772 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
> hasi      130298  0.0  0.0   2604  1804 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
> hasi      130310  0.0  0.0   2604  1772 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
> hasi      130322  0.0  0.0   2604  1916 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
> hasi      130332  0.0  0.0   2604  1864 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
> hasi      130339  0.0  0.0   2604  1704 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
> hasi      130349  0.0  0.0   2604  1700 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
> hasi      130356  0.0  0.0   2604  1716 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
> hasi      130370  0.0  0.0   2604  1864 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
> hasi      130378  0.0  0.0   2604  1880 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
> hasi      130391  0.0  0.0   2604  2016 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
> root      130393  0.0  0.0      0     0 ?        D    11:07   0:00 [kworker/u64:63+flush-btrfs-1]
> root      130394  0.0  0.0      0     0 ?        D    11:07   0:00 [kworker/u64:64+flush-btrfs-1]
> hasi      130402  0.0  0.0   2604  1704 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
> hasi      130410  0.0  0.0   2604  2072 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
> hasi      130420  0.0  0.0   2604  1716 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
> hasi      130427  0.0  0.0   2604  1700 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
> hasi      130436  0.0  0.0   2604  1864 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
> hasi      130448  0.0  0.0   2604  1904 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
> hasi      130458  0.0  0.0   2604  1844 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
> hasi      130470  0.0  0.0   2604  1856 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
> root      130475  0.0  0.0      0     0 ?        D    11:09   0:00 [kworker/u64:65+flush-btrfs-1]
> hasi      130482  0.0  0.0   2604  1916 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
> hasi      130490  0.0  0.0   2604  1688 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
> hasi      130497  0.0  0.0   2604  1732 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
> hasi      130504  0.0  0.0   2604  1872 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
> hasi      130511  0.0  0.0   2604  1944 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
> hasi      130525  0.0  0.0   2604  1716 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
> hasi      130533  0.0  0.0   2604  1872 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
> hasi      130542  0.0  0.0   2604  1716 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
> hasi      130549  0.0  0.0   2604  1872 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
> hasi      130557  0.0  0.0   2604  1804 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
> hasi      130566  0.0  0.0   2604  1772 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
> hasi      130573  0.0  0.0   2604  1728 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
> hasi      130582  0.0  0.0   2604  1716 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
> hasi      130593  0.0  0.0   2604  1864 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
> hasi      130600  0.0  0.0   2604  1716 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
> hasi      130611  0.0  0.0   2604  1776 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
> hasi      130618  0.0  0.0   2604  1900 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
> hasi      130630  0.0  0.0   2604  1872 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
> hasi      130637  0.0  0.0   2604  1700 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
> hasi      130644  0.0  0.0   2604  1596 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
> hasi      130651  0.0  0.0   2604  1872 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
> hasi      130668  0.0  0.0   2604  1772 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
> hasi      130676  0.0  0.0   2700  1988 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
> hasi      130693  0.0  0.0   2604  1716 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
> root      130694  0.0  0.0      0     0 ?        D    11:17   0:00 [kworker/u64:66+flush-btrfs-1]
> hasi      130701  0.0  0.0   2604  1864 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
> hasi      130709  0.0  0.0   2604  1776 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
> hasi      130718  0.0  0.0   2604  1716 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
> hasi      130725  0.0  0.0   2604  1864 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
> hasi      130734  0.0  0.0   2604  1776 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
> hasi      130745  0.0  0.0   2604  1732 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
> hasi      130752  0.0  0.0   2604  2072 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
> root      130767  0.0  0.0      0     0 ?        D    11:18   0:00 [kworker/u64:67+flush-btrfs-1]
> hasi      130775  0.0  0.0   2604  1904 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
> root      130781  0.0  0.0      0     0 ?        D    11:19   0:00 [kworker/u64:69+flush-btrfs-1]
> hasi      130788  0.0  0.0   2604  1716 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
> hasi      130796  0.0  0.0   2604  1864 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
> hasi      130803  0.0  0.0   2604  1776 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
> hasi      130810  0.0  0.0   2604  1944 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
> hasi      130817  0.0  0.0   2604  1916 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
> hasi      130831  0.0  0.0   2604  1704 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
> hasi      130841  0.0  0.0   2604  1944 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
> hasi      130848  0.0  0.0   2604  1944 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
> hasi      130855  0.0  0.0   2604  1828 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
> hasi      130870  0.0  0.0   2604  1872 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
> root      130871  0.0  0.0      0     0 ?        D    11:22   0:00 [kworker/u64:70+flush-btrfs-1]
> hasi      130880  0.0  0.0   2604  1688 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
> hasi      130887  0.0  0.0   2604  1776 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
> hasi      130896  0.0  0.0   2604  1804 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
> hasi      130907  0.0  0.0   2604  1752 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
> hasi      130914  0.0  0.0   2604  1728 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
> root      130915  0.0  0.0      0     0 ?        D    11:23   0:00 [kworker/u64:72+flush-btrfs-1]
> hasi      130926  0.0  0.0   2604  1732 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
> hasi      130933  0.0  0.0   2604  1596 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
> hasi      130944  0.0  0.0   4408  2808 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
> hasi      130947  0.0  0.0   2604  1872 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
> hasi      130955  0.0  0.0   2604  1728 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
> hasi      130962  0.0  0.0   2604  1872 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
> hasi      130977  0.0  0.0   2604  1804 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
> hasi      130985  0.0  0.0   2604  1900 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
> hasi      130995  0.0  0.0   2604  1704 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
> hasi      131024  0.0  0.0   2604  1716 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
> hasi      131031  0.0  0.0   2604  1804 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
> hasi      131040  0.0  0.0   2676  1904 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
> hasi      131048  0.0  0.0   2604  1904 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
> root      131064  0.0  0.0      0     0 ?        D    11:27   0:00 [kworker/u64:73+flush-btrfs-1]
> hasi      131065  0.0  0.0   2604  1688 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
> root      131067  0.0  0.0      0     0 ?        D    11:27   0:00 [kworker/u64:74+flush-btrfs-1]
> hasi      131077  0.0  0.0   2604  2044 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
> hasi      131085  0.0  0.0   2604  1804 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
> hasi      131096  0.0  0.0   2604  1752 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
> hasi      131103  0.0  0.0   2604  1916 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
> hasi      131112  0.0  0.0   2604  1916 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
> hasi      131119  0.0  0.0   2604  1804 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
> hasi      131126  0.0  0.0   2604  1704 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
> hasi      131133  0.0  0.0   2676  2000 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
> hasi      131150  0.0  0.0   2604  1752 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
> hasi      131158  0.0  0.0   2604  1872 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
> hasi      131167  0.0  0.0   2604  1596 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
> hasi      131178  0.0  0.0   2604  1844 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
> root      131184  0.0  0.0      0     0 ?        D    11:32   0:00 [kworker/u64:75+flush-btrfs-1]
> hasi      131191  0.0  0.0   2604  1944 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
> hasi      131200  0.0  0.0   2604  1596 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
> hasi      131207  0.0  0.0   2604  1916 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
> hasi      131216  0.0  0.0   2604  1732 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
> hasi      131227  0.0  0.0   2604  1728 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
> hasi      131234  0.0  0.0   2604  1864 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
> hasi      131250  0.0  0.0   2604  1716 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
> root      131251  0.0  0.0      0     0 ?        D    11:34   0:00 [kworker/u64:76+flush-btrfs-1]
> hasi      131258  0.0  0.0   2604  2004 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
> hasi      131267  0.0  0.0   2604  1916 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
> hasi      131274  0.0  0.0   2604  1772 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
> hasi      131281  0.0  0.0   2604  1728 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
> hasi      131288  0.0  0.0   2604  1872 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
> hasi      131301  0.0  0.0   2604  1728 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
> hasi      131309  0.0  0.0   2604  1728 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
> hasi      131318  0.0  0.0   2604  1872 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
> hasi      131326  0.0  0.0   2604  1944 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
> hasi      131333  0.0  0.0   2676  1844 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
> hasi      131344  0.0  0.0   2604  1728 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
> hasi      131351  0.0  0.0   2604  1880 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
> root      131357  0.0  0.0      0     0 ?        D    11:37   0:00 [kworker/u64:77+flush-btrfs-1]
> hasi      131367  0.0  0.0   2604  1728 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
> root      131372  0.0  0.0      0     0 ?        D    11:38   0:00 [kworker/u64:78+flush-btrfs-1]
> hasi      131379  0.0  0.0   2604  1944 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
> hasi      131386  0.0  0.0   2604  1944 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
> hasi      131397  0.0  0.0   2604  1804 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
> hasi      131404  0.0  0.0   2604  1716 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
> hasi      131412  0.0  0.0   2604  1704 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
> hasi      131419  0.0  0.0   2604  1704 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
> hasi      131426  0.0  0.0   2604  1900 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
> hasi      131435  0.0  0.0   2604  1700 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
> hasi      131456  0.0  0.0   2604  1904 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
> hasi      131464  0.0  0.0   4408  2968 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
> root      131465  0.0  0.0      0     0 ?        D    11:41   0:00 [kworker/u64:81+flush-btrfs-1]
> hasi      131476  0.0  0.0   2604  1700 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
> hasi      131484  0.0  0.0   2604  2000 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
> hasi      131495  0.0  0.0   2604  1728 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
> hasi      131504  0.0  0.0   2604  1804 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
> hasi      131511  0.0  0.0   2604  1776 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
> hasi      131520  0.0  0.0   2604  1776 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
> root      131522  0.0  0.0      0     0 ?        D    11:42   0:00 [kworker/u64:82+flush-btrfs-1]
> hasi      131532  0.0  0.0   2604  1944 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
> hasi      131539  0.0  0.0   2604  1716 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
> hasi      131550  0.0  0.0   2604  1704 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
> hasi      131557  0.0  0.0   2604  1776 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
> hasi      131565  0.0  0.0   2604  1864 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
> hasi      131572  0.0  0.0   2604  1688 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
> hasi      131579  0.0  0.0   2604  1716 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
> hasi      131586  0.0  0.0   2604  1716 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
> hasi      131600  0.0  0.0   2604  1864 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
> hasi      131608  0.0  0.0   2604  1688 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
> hasi      131617  0.0  0.0   2604  2000 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
> hasi      131630  0.0  0.0   2604  1944 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
> hasi      131637  0.0  0.0   2604  1700 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
> hasi      131646  0.0  0.0   2604  2000 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
> root      131654  0.0  0.0      0     0 ?        D    11:47   0:00 [kworker/u64:84+flush-btrfs-1]
> hasi      131655  0.0  0.0   2604  1904 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
> hasi      131667  0.0  0.0   2604  1776 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
> root      131674  0.0  0.0      0     0 ?        D    11:48   0:00 [kworker/u64:86+flush-btrfs-1]
> hasi      131681  0.0  0.0   2604  1716 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
> hasi      131688  0.0  0.0   2604  1716 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
> hasi      131700  0.0  0.0   2604  1776 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
> hasi      131707  0.0  0.0   2604  1596 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
> hasi      131715  0.0  0.0   2604  1716 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
> hasi      131722  0.0  0.0   2604  1864 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
> hasi      131729  0.0  0.0   2604  2072 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
> hasi      131741  0.0  0.0   2604  1732 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
> hasi      131757  0.0  0.0   2604  1844 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
> root      131761  0.0  0.0      0     0 ?        D    11:51   0:00 [kworker/u64:87+flush-btrfs-1]
> hasi      131769  0.0  0.0   2604  1716 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
> hasi      131778  0.0  0.0   2604  1752 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
> root      131779  0.0  0.0      0     0 ?        D    11:52   0:00 [kworker/u64:89+flush-btrfs-1]
> hasi      131787  0.0  0.0   2604  1944 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
> hasi      131794  0.0  0.0   2604  1728 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
> hasi      131803  0.0  0.0   2604  1716 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
> hasi      131810  0.0  0.0   2604  1596 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
> hasi      131819  0.0  0.0   2604  1864 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
> root      131822  0.0  0.0      0     0 ?        D    11:52   0:00 [kworker/u64:90+flush-btrfs-1]
> hasi      131831  0.0  0.0   2604  1804 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
> hasi      131838  0.0  0.0   2604  1752 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
> hasi      131849  0.0  0.0   2604  1596 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
> hasi      131856  0.0  0.0   2604  1916 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
> hasi      131864  0.0  0.0   2604  1728 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
> hasi      131871  0.0  0.0   2604  1916 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
> hasi      131878  0.0  0.0   2604  1944 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
> hasi      131885  0.0  0.0   2604  1732 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
> hasi      131900  0.0  0.0   2604  1728 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
> hasi      131908  0.0  0.0   2604  1864 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
> hasi      131917  0.0  0.0   2676  1828 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
> hasi      131930  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
> hasi      131937  0.0  0.0   2604  1772 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
> hasi      131946  0.0  0.0   2604  2016 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
> hasi      131956  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
> hasi      131965  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
> hasi      131998  0.0  0.0   2604  1716 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
> hasi      132005  0.0  0.0   2604  1872 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
> hasi      132016  0.0  0.0   2604  1772 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
> hasi      132023  0.0  0.0   2604  1908 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
> hasi      132031  0.0  0.0   2604  1728 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
> hasi      132038  0.0  0.0   2604  1804 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
> hasi      132045  0.0  0.0   2604  1844 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
> hasi      132057  0.0  0.0   2604  1716 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
> root      132058  0.0  0.0      0     0 ?        D    11:59   0:00 [kworker/u64:92+flush-btrfs-1]
> hasi      132080  0.0  0.0   2604  1908 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
> hasi      132093  0.0  0.0   2604  1916 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
> root      132094  0.0  0.0      0     0 ?        D    12:01   0:00 [kworker/u64:93+flush-btrfs-1]
> hasi      132101  0.0  0.0   2604  1944 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
> root      132103  0.0  0.0      0     0 ?        D    12:02   0:00 [kworker/u64:94+flush-btrfs-1]
> hasi      132110  0.0  0.0   2604  1716 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
> hasi      132117  0.0  0.0   2604  1704 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
> hasi      132126  0.0  0.0   2604  1688 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
> hasi      132133  0.0  0.0   2604  1944 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
> hasi      132142  0.0  0.0   2604  1688 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
> hasi      132153  0.0  0.0   2604  1772 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
> hasi      132160  0.0  0.0   2604  1916 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
> hasi      132172  0.0  0.0   2604  1716 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
> hasi      132184  0.0  0.0   2604  1728 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
> hasi      132192  0.0  0.0   2604  1712 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
> hasi      132201  0.0  0.0   2604  1864 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
> hasi      132208  0.0  0.0   2604  1596 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
> hasi      132215  0.0  0.0   2604  1944 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
> hasi      132229  0.0  0.0   2604  1804 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
> hasi      132237  0.0  0.0   2604  1728 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
> hasi      132246  0.0  0.0   2604  1932 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
> hasi      132261  0.0  0.0   2604  1772 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
> hasi      132268  0.0  0.0   2604  1804 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
> hasi      132277  0.0  0.0   2676  1816 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
> hasi      132285  0.0  0.0   2604  1772 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
> root      132286  0.0  0.0      0     0 ?        D    12:07   0:00 [kworker/u64:96+flush-btrfs-1]
> hasi      132295  0.0  0.0   2604  1804 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
> root      132296  0.0  0.0      0     0 ?        D    12:07   0:00 [kworker/u64:97+flush-btrfs-1]
> hasi      132307  0.0  0.0   2604  1728 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
> hasi      132314  0.0  0.0   2604  1732 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
> hasi      132325  0.0  0.0   2604  1932 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
> hasi      132334  0.0  0.0   2604  1716 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
> hasi      132342  0.0  0.0   2604  1596 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
> hasi      132349  0.0  0.0   2604  1804 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
> hasi      132356  0.0  0.0   2604  1844 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
> hasi      132368  0.0  0.0   2604  1932 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
> root      132369  0.0  0.0      0     0 ?        D    12:09   0:00 [kworker/u64:98+flush-btrfs-1]
> hasi      132383  0.0  0.0   2604  1772 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
> root      132384  0.0  0.0      0     0 ?        D    12:11   0:00 [kworker/u64:99+flush-btrfs-1]
> hasi      132392  0.0  0.0   2604  1688 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
> hasi      132401  0.0  0.0   2604  1728 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
> hasi      132409  0.0  0.0   2604  1864 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
> hasi      132416  0.0  0.0   2604  1596 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
> hasi      132425  0.0  0.0   2604  1752 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
> hasi      132432  0.0  0.0   2604  1700 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
> hasi      132441  0.0  0.0   2604  1704 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
> hasi      132452  0.0  0.0   2604  1596 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
> hasi      132459  0.0  0.0   2604  1596 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
> hasi      132470  0.0  0.0   2604  1944 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
> hasi      132477  0.0  0.0   2604  1752 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
> hasi      132485  0.0  0.0   2604  1944 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
> hasi      132492  0.0  0.0   2604  1776 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
> hasi      132499  0.0  0.0   2604  1864 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
> hasi      132506  0.0  0.0   2604  1772 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
> hasi      132523  0.0  0.0   2604  1772 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
> hasi      132531  0.0  0.0   2604  1904 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
> hasi      132542  0.0  0.0   2604  2000 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
> root      132556  0.0  0.0      0     0 ?        D    12:17   0:00 [kworker/u64:100+flush-btrfs-1]
> hasi      132557  0.0  0.0   2604  1872 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
> hasi      132564  0.0  0.0   2604  1728 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
> hasi      132578  0.0  0.0   2604  1704 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
> hasi      132585  0.0  0.0   2604  1916 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
> hasi      132594  0.0  0.0   2604  1864 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
> hasi      132608  0.0  0.0   2604  1816 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
> hasi      132617  0.0  0.0   2604  1772 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
> root      132618  0.0  0.0      0     0 ?        D    12:18   0:00 [kworker/u64:101+flush-btrfs-1]
> hasi      132630  0.0  0.0   2604  1932 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
> hasi      132640  0.0  0.0   2604  1772 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
> hasi      132648  0.0  0.0   2604  1752 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
> hasi      132655  0.0  0.0   2604  1916 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
> hasi      132662  0.0  0.0   2604  1844 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
> hasi      132672  0.0  0.0   2604  1772 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
> hasi      132686  0.0  0.0   2604  1960 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
> hasi      132697  0.0  0.0   2604  1776 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
> hasi      132704  0.0  0.0   2604  1752 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
> root      132705  0.0  0.0      0     0 ?        D    12:22   0:00 [kworker/u64:103+flush-btrfs-1]
> hasi      132713  0.0  0.0   2604  1872 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
> hasi      132720  0.0  0.0   2604  1716 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
> hasi      132729  0.0  0.0   2604  1700 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
> hasi      132736  0.0  0.0   2604  1732 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
> hasi      132745  0.0  0.0   2604  1732 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
> hasi      132756  0.0  0.0   2604  1596 ?        Ds   12:23   0:00 /usr/lib/ssh/sftp-server
> hasi      132763  0.0  0.0   2604  1916 ?        Ds   12:23   0:00 /usr/lib/ssh/sftp-server
> hasi      132774  0.0  0.0   2604  1688 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
> hasi      132781  0.0  0.0   2604  1700 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
> hasi      132789  0.0  0.0   2604  2016 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
> hasi      132800  0.0  0.0   2604  1732 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
> hasi      132807  0.0  0.0   2604  1728 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
> hasi      132814  0.0  0.0   2604  1804 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
> root      132815  0.0  0.0      0     0 ?        D    12:24   0:00 [kworker/u64:104+flush-btrfs-1]
> hasi      132829  0.0  0.0   2604  1704 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
> hasi      132839  0.0  0.0   2740  2000 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
> hasi      132847  0.0  0.0   2604  1960 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
> root      132851  0.0  0.0      0     0 ?        D    12:27   0:00 [kworker/u64:105+flush-btrfs-1]
> root      132852  0.0  0.0      0     0 ?        D    12:27   0:00 [kworker/u64:107+flush-btrfs-1]
> hasi      132860  0.0  0.0   2604  1688 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
> hasi      132867  0.0  0.0   2604  1688 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
> hasi      132876  0.0  0.0   2604  1944 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
> hasi      132883  0.0  0.0   2604  1856 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
> hasi      132893  0.0  0.0   2604  1596 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
> hasi      132925  0.0  0.0   2604  1932 ?        Ds   12:28   0:00 /usr/lib/ssh/sftp-server
> hasi      132935  0.0  0.0   2604  1716 ?        Ds   12:28   0:00 /usr/lib/ssh/sftp-server
> hasi      132946  0.0  0.0   2604  2044 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
> root      132948  0.0  0.0      0     0 ?        D    12:29   0:00 [kworker/u64:109+flush-btrfs-1]
> hasi      132955  0.0  0.0   2604  1716 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
> hasi      132963  0.0  0.0   2604  1732 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
> hasi      132970  0.0  0.0   2604  1596 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
> hasi      132977  0.0  0.0   2604  1804 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
> hasi      132985  0.0  0.0   2604  2016 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
> hasi      133001  0.0  0.0   2604  1944 ?        Ds   12:31   0:00 /usr/lib/ssh/sftp-server
> hasi      133011  0.0  0.0   2604  1916 ?        Ds   12:31   0:00 /usr/lib/ssh/sftp-server
> hasi      133018  0.0  0.0   2604  1776 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
> root      133022  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:110+flush-btrfs-1]
> hasi      133030  0.0  0.0   2604  1776 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
> hasi      133037  0.0  0.0   2604  1704 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
> root      133038  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:111+flush-btrfs-1]
> hasi      133047  0.0  0.0   2604  1728 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
> hasi      133054  0.0  0.0   2604  1704 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
> root      133055  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:112+flush-btrfs-1]
> hasi      133064  0.0  0.0   2604  1700 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
> hasi      133075  0.0  0.0   2604  1772 ?        Ds   12:33   0:00 /usr/lib/ssh/sftp-server
> hasi      133082  0.0  0.0   2604  1596 ?        Ds   12:33   0:00 /usr/lib/ssh/sftp-server
> hasi      133098  0.0  0.0   2604  1596 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
> hasi      133106  0.0  0.0   2604  2016 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
> hasi      133115  0.0  0.0   2604  1944 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
> hasi      133124  0.0  0.0   2604  1732 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
> hasi      133131  0.0  0.0   2604  1716 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
> hasi      133138  0.0  0.0   2604  1772 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
> hasi      133157  0.0  0.0   2604  1864 ?        Ds   12:36   0:00 /usr/lib/ssh/sftp-server
> hasi      133167  0.0  0.0   2604  1844 ?        Ds   12:36   0:00 /usr/lib/ssh/sftp-server
> hasi      133175  0.0  0.0   2604  1900 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
> hasi      133188  0.0  0.0   2604  1732 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
> hasi      133195  0.0  0.0   2604  1824 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
> hasi      133204  0.0  0.0   2604  1704 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
> hasi      133211  0.0  0.0   2604  1944 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
> hasi      133221  0.0  0.0   2604  2044 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
> hasi      133234  0.0  0.0   2604  1716 ?        Ds   12:38   0:00 /usr/lib/ssh/sftp-server
> hasi      133241  0.0  0.0   2604  1972 ?        Ds   12:38   0:00 /usr/lib/ssh/sftp-server
> root      133243  0.0  0.0      0     0 ?        D    12:38   0:00 [kworker/u64:113+flush-btrfs-1]
> root      133249  0.0  0.0      0     0 ?        D    12:38   0:00 [kworker/u64:115+flush-btrfs-1]
> hasi      133262  0.0  0.0   2604  1872 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
> hasi      133269  0.0  0.0   2604  1704 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
> hasi      133277  0.0  0.0   2604  1804 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
> hasi      133284  0.0  0.0   2604  1716 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
> hasi      133291  0.0  0.0   2604  1900 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
> hasi      133299  0.0  0.0   2604  1904 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
> root      133310  0.0  0.0      0     0 ?        D    12:41   0:00 [kworker/u64:116+flush-btrfs-1]
> hasi      133318  0.0  0.0   2604  2016 ?        Ds   12:41   0:00 /usr/lib/ssh/sftp-server
> root      133322  0.0  0.0      0     0 ?        D    12:41   0:00 [kworker/u64:117+flush-btrfs-1]
> hasi      133332  0.0  0.0   2604  1716 ?        Ds   12:41   0:00 /usr/lib/ssh/sftp-server
> hasi      133339  0.0  0.0   2604  1688 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
> hasi      133348  0.0  0.0   2604  1900 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
> hasi      133357  0.0  0.0   2604  1772 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
> hasi      133367  0.0  0.0   2604  1864 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
> hasi      133374  0.0  0.0   2604  1804 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
> hasi      133383  0.0  0.0   2604  1804 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
> root      133388  0.0  0.0      0     0 ?        D    12:42   0:00 [kworker/u64:118+flush-btrfs-1]
> root      133389  0.0  0.0      0     0 ?        D    12:43   0:00 [kworker/u64:120+flush-btrfs-1]
> hasi      133396  0.0  0.0   2604  1728 ?        Ds   12:43   0:00 /usr/lib/ssh/sftp-server
> hasi      133403  0.0  0.0   2604  1596 ?        Ds   12:43   0:00 /usr/lib/ssh/sftp-server
> hasi      133414  0.0  0.0   2604  1752 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
> hasi      133421  0.0  0.0   2604  1864 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
> hasi      133429  0.0  0.0   2604  1872 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
> hasi      133436  0.0  0.0   2604  1596 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
> hasi      133443  0.0  0.0   2604  1752 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
> hasi      133450  0.0  0.0   2604  1688 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
> hasi      133464  0.0  0.0   2604  1916 ?        Ds   12:46   0:00 /usr/lib/ssh/sftp-server
> hasi      133474  0.0  0.0   2604  1732 ?        Ds   12:46   0:00 /usr/lib/ssh/sftp-server
> hasi      133481  0.0  0.0   2604  1972 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
> hasi      133501  0.0  0.0   2604  1772 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
> hasi      133504  0.0  0.0   2604  1772 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
> root      133505  0.0  0.0      0     0 ?        D    12:47   0:00 [kworker/u64:121+flush-btrfs-1]
> hasi      133514  0.0  0.0   2604  1688 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
> hasi      133521  0.0  0.0   2604  1932 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
> hasi      133533  0.0  0.0   2604  1752 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
> hasi      133547  0.0  0.0   2604  1776 ?        Ds   12:48   0:00 /usr/lib/ssh/sftp-server
> hasi      133554  0.0  0.0   2604  1908 ?        Ds   12:48   0:00 /usr/lib/ssh/sftp-server
> hasi      133568  0.0  0.0   2604  1944 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
> hasi      133575  0.0  0.0   2604  1864 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
> root      133576  0.0  0.0      0     0 ?        D    12:49   0:00 [kworker/u64:122+flush-btrfs-1]
> hasi      133584  0.0  0.0   2604  1916 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
> hasi      133591  0.0  0.0   2604  1596 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
> hasi      133598  0.0  0.0   2604  2016 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
> hasi      133606  0.0  0.0   2604  2000 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
> hasi      133624  0.0  0.0   2604  1704 ?        Ds   12:51   0:00 /usr/lib/ssh/sftp-server
> hasi      133634  0.0  0.0   2604  1716 ?        Ds   12:51   0:00 /usr/lib/ssh/sftp-server
> hasi      133641  0.0  0.0   2604  1704 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
> hasi      133652  0.0  0.0   2604  1696 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
> hasi      133656  0.0  0.0   2604  1904 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
> hasi      133667  0.0  0.0   2604  1872 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
> hasi      133674  0.0  0.0   2604  1944 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
> root      133675  0.0  0.0      0     0 ?        D    12:52   0:00 [kworker/u64:123+flush-btrfs-1]
> hasi      133684  0.0  0.0   2604  1772 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
> hasi      133695  0.0  0.0   2604  1716 ?        Ds   12:53   0:00 /usr/lib/ssh/sftp-server
> hasi      133702  0.0  0.0   2604  1596 ?        Ds   12:53   0:00 /usr/lib/ssh/sftp-server
> hasi      133713  0.0  0.0   2604  1704 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
> hasi      133721  0.0  0.0   2604  1688 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
> root      133722  0.0  0.0      0     0 ?        D    12:54   0:00 [kworker/u64:125+flush-btrfs-1]
> hasi      133730  0.0  0.0   2604  1804 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
> hasi      133737  0.0  0.0   2736  1908 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
> hasi      133746  0.0  0.0   2604  1732 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
> hasi      133753  0.0  0.0   2604  1716 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
> root      133757  0.0  0.0      0     0 ?        D    12:55   0:00 [kworker/u64:126+flush-btrfs-1]
> hasi      133769  0.0  0.0   2604  1688 ?        Ds   12:56   0:00 /usr/lib/ssh/sftp-server
> hasi      133779  0.0  0.0   2604  1596 ?        Ds   12:56   0:00 /usr/lib/ssh/sftp-server
> hasi      133786  0.0  0.0   2604  1880 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
> hasi      133796  0.0  0.0   2604  1716 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
> hasi      133803  0.0  0.0   2604  1872 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
> hasi      133812  0.0  0.0   2604  1916 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
> hasi      133819  0.0  0.0   2604  1900 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
> hasi      133830  0.0  0.0   2604  1772 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
> root      133835  0.0  0.0      0     0 ?        D    12:57   0:00 [kworker/u64:127+flush-btrfs-1]
> hasi      133842  0.0  0.0   2776  1952 ?        Ds   12:58   0:00 /usr/lib/ssh/sftp-server
> hasi      133851  0.0  0.0   2604  1872 ?        Ds   12:58   0:00 /usr/lib/ssh/sftp-server
> hasi      133863  0.0  0.0   2604  2072 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
> hasi      133874  0.0  0.0   2604  1916 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
> hasi      133882  0.0  0.0   2604  1804 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
> hasi      133889  0.0  0.0   2604  1732 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
> hasi      133896  0.0  0.0   2604  1752 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
> hasi      133903  0.0  0.0   2604  1700 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
> root      133917  0.0  0.0      0     0 ?        D    13:01   0:00 [kworker/u64:128+flush-btrfs-1]
> hasi      133925  0.0  0.0   2604  1752 ?        Ds   13:01   0:00 /usr/lib/ssh/sftp-server
> hasi      133935  0.0  0.0   2604  1728 ?        Ds   13:01   0:00 /usr/lib/ssh/sftp-server
> hasi      133942  0.0  0.0   2604  1688 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
> hasi      133950  0.0  0.0   2604  1716 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
> hasi      133957  0.0  0.0   2604  2016 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
> hasi      133970  0.0  0.0   2604  1700 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
> hasi      133977  0.0  0.0   2604  1916 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
> hasi      133986  0.0  0.0   2604  1804 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
> root      133991  0.0  0.0      0     0 ?        D    13:02   0:00 [kworker/u64:129+flush-btrfs-1]
> hasi      133998  0.0  0.0   2604  1776 ?        Ds   13:03   0:00 /usr/lib/ssh/sftp-server
> hasi      134005  0.0  0.0   2604  1704 ?        Ds   13:03   0:00 /usr/lib/ssh/sftp-server
> hasi      134017  0.0  0.0   2604  1776 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
> hasi      134027  0.0  0.0   4408  2808 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
> hasi      134030  0.0  0.0   2604  1704 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
> hasi      134047  0.0  0.0   2604  1700 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
> hasi      134058  0.0  0.0   2604  1916 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
> hasi      134065  0.0  0.0   2604  1752 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
> root      134073  0.0  0.0      0     0 ?        D    13:06   0:00 [kworker/u64:130+flush-btrfs-1]
> hasi      134081  0.0  0.0   2604  1916 ?        Ds   13:06   0:00 /usr/lib/ssh/sftp-server
> hasi      134091  0.0  0.0   2604  1688 ?        Ds   13:06   0:00 /usr/lib/ssh/sftp-server
> hasi      134098  0.0  0.0   2604  1704 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
> hasi      134106  0.0  0.0   2604  1864 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
> hasi      134113  0.0  0.0   2604  1776 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
> hasi      134122  0.0  0.0   2604  1728 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
> hasi      134129  0.0  0.0   2604  2000 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
> root      134132  0.0  0.0      0     0 ?        D    13:07   0:00 [kworker/u64:131+flush-btrfs-1]
> hasi      134142  0.0  0.0   2604  1700 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
> root      134147  0.0  0.0      0     0 ?        D    13:07   0:00 [kworker/u64:135+flush-btrfs-1]
> hasi      134154  0.0  0.0   2604  1972 ?        Ds   13:08   0:00 /usr/lib/ssh/sftp-server
> hasi      134162  0.0  0.0   2604  1932 ?        Ds   13:08   0:00 /usr/lib/ssh/sftp-server
> hasi      134180  0.0  0.0   2604  1772 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
> hasi      134187  0.0  0.0   2604  1716 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
> hasi      134199  0.0  0.0   2604  1872 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
> hasi      134202  0.0  0.0   2604  1732 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
> hasi      134209  0.0  0.0   2604  1716 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
> hasi      134216  0.0  0.0   2604  1864 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
> hasi      134230  0.0  0.0   2604  1804 ?        Ds   13:11   0:00 /usr/lib/ssh/sftp-server
> hasi      134240  0.0  0.0   2604  1704 ?        Ds   13:11   0:00 /usr/lib/ssh/sftp-server
> hasi      134247  0.0  0.0   2604  1916 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
> hasi      134255  0.0  0.0   2604  1916 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
> hasi      134262  0.0  0.0   2604  1904 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
> hasi      134273  0.0  0.0   2604  1704 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
> hasi      134282  0.0  0.0   2604  1872 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
> hasi      134289  0.0  0.0   2604  1728 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
> hasi      134300  0.0  0.0   2604  1804 ?        Ds   13:13   0:00 /usr/lib/ssh/sftp-server
> hasi      134307  0.0  0.0   2604  1944 ?        Ds   13:13   0:00 /usr/lib/ssh/sftp-server
> hasi      134318  0.0  0.0   2604  1732 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
> hasi      134325  0.0  0.0   2604  1596 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
> hasi      134338  0.0  0.0   2604  1688 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
> hasi      134340  0.0  0.0   2604  2016 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
> hasi      134356  0.0  0.0   2604  1872 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
> hasi      134363  0.0  0.0   2604  1916 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
> root      134367  0.0  0.0      0     0 ?        D    13:15   0:00 [kworker/u64:136+flush-btrfs-1]
> hasi      134381  0.0  0.0   2604  1732 ?        Ds   13:16   0:00 /usr/lib/ssh/sftp-server
> hasi      134391  0.0  0.0   2604  1772 ?        Ds   13:16   0:00 /usr/lib/ssh/sftp-server
> hasi      134398  0.0  0.0   2604  1752 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
> hasi      134406  0.0  0.0   2604  1704 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
> hasi      134413  0.0  0.0   2604  1688 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
> hasi      134422  0.0  0.0   2604  1916 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
> hasi      134431  0.0  0.0   2604  1704 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
> hasi      134438  0.0  0.0   2604  1752 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
> hasi      134449  0.0  0.0   2604  1704 ?        Ds   13:18   0:00 /usr/lib/ssh/sftp-server
> hasi      134461  0.0  0.0   2604  1900 ?        Ds   13:18   0:00 /usr/lib/ssh/sftp-server
> hasi      134486  0.0  0.0   2604  1872 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
> root      134487  0.0  0.0      0     0 ?        D    13:19   0:00 [kworker/u64:138+flush-btrfs-1]
> hasi      134494  0.0  0.0   2604  1916 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
> hasi      134502  0.0  0.0   2604  1916 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
> hasi      134509  0.0  0.0   2604  1864 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
> hasi      134516  0.0  0.0   2604  1752 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
> hasi      134523  0.0  0.0   2604  1688 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
> hasi      134537  0.0  0.0   2604  1772 ?        Ds   13:21   0:00 /usr/lib/ssh/sftp-server
> hasi      134547  0.0  0.0   2604  1944 ?        Ds   13:21   0:00 /usr/lib/ssh/sftp-server
> hasi      134554  0.0  0.0   2604  1732 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
> hasi      134562  0.0  0.0   2604  1864 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
> hasi      134569  0.0  0.0   2604  1596 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
> hasi      134578  0.0  0.0   2604  1772 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
> hasi      134587  0.0  0.0   2604  1916 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
> hasi      134594  0.0  0.0   2604  1864 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
> hasi      134605  0.0  0.0   2604  1716 ?        Ds   13:23   0:00 /usr/lib/ssh/sftp-server
> hasi      134612  0.0  0.0   2604  1716 ?        Ds   13:23   0:00 /usr/lib/ssh/sftp-server
> hasi      134623  0.0  0.0   2604  1864 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
> hasi      134630  0.0  0.0   2604  1716 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
> hasi      134638  0.0  0.0   2604  1732 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
> hasi      134645  0.0  0.0   2604  1900 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
> hasi      134653  0.0  0.0   2604  1804 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
> hasi      134660  0.0  0.0   2604  1688 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
> root      134675  0.0  0.0      0     0 ?        D    13:26   0:00 [kworker/u64:139+flush-btrfs-1]
> hasi      134683  0.0  0.0   2604  1732 ?        Ds   13:26   0:00 /usr/lib/ssh/sftp-server
> hasi      134693  0.0  0.0   2604  1700 ?        Ds   13:26   0:00 /usr/lib/ssh/sftp-server
> hasi      134700  0.0  0.0   2604  1716 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
> hasi      134708  0.0  0.0   2604  1688 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
> hasi      134715  0.0  0.0   2604  1864 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
> hasi      134724  0.0  0.0   2604  1596 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
> hasi      134733  0.0  0.0   2604  1776 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
> hasi      134740  0.0  0.0   2604  1728 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
> root      134745  0.0  0.0      0     0 ?        D    13:28   0:00 [kworker/u64:142+flush-btrfs-1]
> hasi      134752  0.0  0.0   2604  1688 ?        Ds   13:28   0:00 /usr/lib/ssh/sftp-server
> hasi      134759  0.0  0.0   2604  1904 ?        Ds   13:28   0:00 /usr/lib/ssh/sftp-server
> hasi      134771  0.0  0.0   2604  1932 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
> hasi      134784  0.0  0.0   2604  1772 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
> hasi      134792  0.0  0.0   2604  1916 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
> hasi      134799  0.0  0.0   2604  1700 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
> hasi      134806  0.0  0.0   2604  1716 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
> hasi      134813  0.0  0.0   2604  1596 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
> root      134814  0.0  0.0      0     0 ?        D    13:29   0:00 [kworker/u64:143+flush-btrfs-1]
> hasi      134828  0.0  0.0   2604  1776 ?        Ds   13:31   0:00 /usr/lib/ssh/sftp-server
> hasi      134838  0.0  0.0   2604  1916 ?        Ds   13:31   0:00 /usr/lib/ssh/sftp-server
> hasi      134848  0.0  0.0   2604  1716 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
> hasi      134856  0.0  0.0   2604  1804 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
> hasi      134863  0.0  0.0   2604  1872 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
> hasi      134872  0.0  0.0   2604  1716 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
> hasi      134881  0.0  0.0   2604  1916 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
> hasi      134888  0.0  0.0   2604  1916 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
> root      134893  0.0  0.0      0     0 ?        D    13:33   0:00 [kworker/u64:145+flush-btrfs-1]
> hasi      134900  0.0  0.0   2604  1872 ?        Ds   13:33   0:00 /usr/lib/ssh/sftp-server
> hasi      134911  0.0  0.0   2604  1700 ?        Ds   13:33   0:00 /usr/lib/ssh/sftp-server
> hasi      134923  0.0  0.0   2604  1728 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
> hasi      134930  0.0  0.0   2604  1944 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
> hasi      134938  0.0  0.0   2604  1880 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
> hasi      134946  0.0  0.0   2604  1732 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
> hasi      134953  0.0  0.0   2604  1944 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
> hasi      134960  0.0  0.0   2604  1900 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
> hasi      134981  0.0  0.0   2604  1716 ?        Ds   13:36   0:00 /usr/lib/ssh/sftp-server
> hasi      134991  0.0  0.0   2604  1804 ?        Ds   13:36   0:00 /usr/lib/ssh/sftp-server
> hasi      134999  0.0  0.0   2604  1864 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
> hasi      135007  0.0  0.0   2604  1944 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
> hasi      135014  0.0  0.0   2604  1716 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
> hasi      135023  0.0  0.0   2604  1716 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
> hasi      135032  0.0  0.0   2604  1596 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
> hasi      135039  0.0  0.0   2604  1700 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
> root      135065  0.0  0.0      0     0 ?        D    13:38   0:00 [kworker/u64:146+flush-btrfs-1]
> hasi      135072  0.0  0.0   2604  1596 ?        Ds   13:38   0:00 /usr/lib/ssh/sftp-server
> hasi      135085  0.0  0.0   2604  1916 ?        Ds   13:38   0:00 /usr/lib/ssh/sftp-server
> hasi      135097  0.0  0.0   2604  1972 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
> hasi      135105  0.0  0.0   2604  1716 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
> hasi      135113  0.0  0.0   2604  1700 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
> hasi      135120  0.0  0.0   2604  1900 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
> hasi      135133  0.0  0.0   4308  2940 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
> hasi      135140  0.0  0.0   2604  1960 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
> root      135161  0.0  0.0      0     0 ?        D    13:41   0:00 [kworker/u64:148+flush-btrfs-1]
> hasi      135169  0.0  0.0   2604  1728 ?        Ds   13:41   0:00 /usr/lib/ssh/sftp-server
> root      135170  0.0  0.0      0     0 ?        D    13:41   0:00 [kworker/u64:149+flush-btrfs-1]
> hasi      135183  0.0  0.0   2604  1732 ?        Ds   13:41   0:00 /usr/lib/ssh/sftp-server
> hasi      135191  0.0  0.0   2604  1704 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
> hasi      135199  0.0  0.0   2604  1804 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
> hasi      135206  0.0  0.0   2604  1700 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
> hasi      135215  0.0  0.0   2604  1752 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
> hasi      135224  0.0  0.0   2604  1864 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
> hasi      135231  0.0  0.0   2604  1688 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
> root      135234  0.0  0.0      0     0 ?        D    13:43   0:00 [kworker/u64:150+flush-btrfs-1]
> hasi      135241  0.0  0.0   2604  1688 ?        Ds   13:43   0:00 /usr/lib/ssh/sftp-server
> hasi      135250  0.0  0.0   2604  1732 ?        Ds   13:43   0:00 /usr/lib/ssh/sftp-server
> hasi      135258  0.0  0.0   2604  1776 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
> hasi      135265  0.0  0.0   2604  1716 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
> hasi      135273  0.0  0.0   2604  1596 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
> hasi      135280  0.0  0.0   2604  1732 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
> hasi      135287  0.0  0.0   2604  1916 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
> hasi      135294  0.0  0.0   2604  1864 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
> hasi      135308  0.0  0.0   2604  1732 ?        Ds   13:46   0:00 /usr/lib/ssh/sftp-server
> hasi      135316  0.0  0.0   2604  1596 ?        Ds   13:46   0:00 /usr/lib/ssh/sftp-server
> hasi      135323  0.0  0.0   2604  1716 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
> hasi      135337  0.0  0.0   2604  1932 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
> hasi      135345  0.0  0.0   2604  1944 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
> hasi      135355  0.0  0.0   2604  1704 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
> hasi      135364  0.0  0.0   2604  1704 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
> hasi      135371  0.0  0.0   2604  1864 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
> hasi      135385  0.0  0.0   2604  1704 ?        Ds   13:48   0:00 /usr/lib/ssh/sftp-server
> hasi      135395  0.0  0.0   2604  1752 ?        Ds   13:48   0:00 /usr/lib/ssh/sftp-server
> hasi      135403  0.0  0.0   2604  1752 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
> hasi      135410  0.0  0.0   2604  1864 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
> hasi      135418  0.0  0.0   2604  1700 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
> hasi      135425  0.0  0.0   2604  1944 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
> hasi      135432  0.0  0.0   2676  1844 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
> hasi      135439  0.0  0.0   2604  1776 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
> hasi      135456  0.0  0.0   2604  1816 ?        Ds   13:51   0:00 /usr/lib/ssh/sftp-server
> hasi      135468  0.0  0.0   2604  1716 ?        Ds   13:51   0:00 /usr/lib/ssh/sftp-server
> hasi      135475  0.0  0.0   2604  1916 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
> hasi      135483  0.0  0.0   2604  1776 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
> hasi      135490  0.0  0.0   2604  2072 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
> hasi      135500  0.0  0.0   2604  1872 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
> hasi      135509  0.0  0.0   2604  1864 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
> hasi      135517  0.0  0.0   2604  1716 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
> hasi      135527  0.0  0.0   2604  1704 ?        Ds   13:53   0:00 /usr/lib/ssh/sftp-server
> hasi      135542  0.0  0.0   2604  1596 ?        Ds   13:53   0:00 /usr/lib/ssh/sftp-server
> hasi      135551  0.0  0.0   2604  1716 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
> hasi      135559  0.0  0.0   2604  1732 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
> hasi      135567  0.0  0.0   2604  1916 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
> hasi      135574  0.0  0.0   2604  1716 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
> hasi      135582  0.0  0.0   2604  1728 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
> hasi      135589  0.0  0.0   2604  1772 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
> hasi      135602  0.0  0.0   2604  1700 ?        Ds   13:56   0:00 /usr/lib/ssh/sftp-server
> hasi      135610  0.0  0.0   2604  1992 ?        Ds   13:56   0:00 /usr/lib/ssh/sftp-server
> root      135614  0.0  0.0      0     0 ?        D    13:57   0:00 [kworker/u64:151+flush-btrfs-1]
> hasi      135619  0.0  0.0   2604  1688 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
> hasi      135627  0.0  0.0   2604  2000 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
> hasi      135637  0.0  0.0   2604  1596 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
> hasi      135647  0.0  0.0   2604  1992 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
> root      135652  0.0  0.0      0     0 ?        D    13:57   0:00 [kworker/u64:152+flush-btrfs-1]
> hasi      135659  0.0  0.0   2604  2016 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
> hasi      135667  0.0  0.0   2604  1900 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
> hasi      135679  0.0  0.0   2604  1972 ?        Ds   13:58   0:00 /usr/lib/ssh/sftp-server
> hasi      135688  0.0  0.0   2604  1916 ?        Ds   13:58   0:00 /usr/lib/ssh/sftp-server
> root      135689  0.0  0.0      0     0 ?        D    13:59   0:00 [kworker/u64:153+flush-btrfs-1]
> hasi      135697  0.0  0.0   2604  1992 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
> hasi      135707  0.0  0.0   2604  1804 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
> hasi      135716  0.0  0.0   2604  1716 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
> hasi      135723  0.0  0.0   2604  1880 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
> hasi      135731  0.0  0.0   2604  1772 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
> hasi      135738  0.0  0.0   2604  1916 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
> root      135754  0.0  0.0      0     0 ?        D    14:01   0:00 [kworker/u64:154+flush-btrfs-1]
> hasi      135763  0.0  0.0   2604  1944 ?        Ds   14:01   0:00 /usr/lib/ssh/sftp-server
> hasi      135771  0.0  0.0   2604  1716 ?        Ds   14:01   0:00 /usr/lib/ssh/sftp-server
> hasi      135778  0.0  0.0   2604  1776 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
> hasi      135789  0.0  0.0   2604  1872 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
> hasi      135796  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
> hasi      135805  0.0  0.0   2604  1716 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
> hasi      135814  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
> hasi      135821  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
> hasi      135831  0.0  0.0   2604  1944 ?        Ds   14:03   0:00 /usr/lib/ssh/sftp-server
> hasi      135846  0.0  0.0   2604  1960 ?        Ds   14:03   0:00 /usr/lib/ssh/sftp-server
> hasi      135857  0.0  0.0   2604  1704 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
> hasi      135864  0.0  0.0   2604  1904 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
> hasi      135878  0.0  0.0   2604  1804 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
> hasi      135885  0.0  0.0   2604  1716 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
> hasi      135892  0.0  0.0   2604  1804 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
> hasi      135899  0.0  0.0   2604  1596 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
> hasi      135910  0.0  0.0   2604  1688 ?        Ds   14:06   0:00 /usr/lib/ssh/sftp-server
> hasi      135918  0.0  0.0   2604  1728 ?        Ds   14:06   0:00 /usr/lib/ssh/sftp-server
> hasi      135926  0.0  0.0   2604  1732 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
> hasi      135935  0.0  0.0   2604  1772 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
> root      135936  0.0  0.0      0     0 ?        D    14:07   0:00 [kworker/u64:155+flush-btrfs-1]
> hasi      135943  0.0  0.0   2604  1772 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
> hasi      135952  0.0  0.0   2604  1728 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
> hasi      135961  0.0  0.0   2604  1864 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
> hasi      135968  0.0  0.0   2604  2044 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
> root      135969  0.0  0.0      0     0 ?        D    14:07   0:00 [kworker/u64:156+flush-btrfs-1]
> hasi      136004  0.0  0.0   2604  1908 ?        Ds   14:08   0:00 /usr/lib/ssh/sftp-server
> root      136011  0.0  0.0      0     0 ?        D    14:08   0:00 [kworker/u64:157+flush-btrfs-1]
> hasi      136019  0.0  0.0   2604  1732 ?        Ds   14:08   0:00 /usr/lib/ssh/sftp-server
> hasi      136027  0.0  0.0   2604  1700 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
> hasi      136035  0.0  0.0   2604  1700 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
> hasi      136043  0.0  0.0   2604  1864 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
> hasi      136050  0.0  0.0   2604  1772 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
> hasi      136057  0.0  0.0   2604  1776 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
> hasi      136064  0.0  0.0   2604  1944 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
> hasi      136075  0.0  0.0   2604  1772 ?        Ds   14:11   0:00 /usr/lib/ssh/sftp-server
> hasi      136084  0.0  0.0   2604  1944 ?        Ds   14:11   0:00 /usr/lib/ssh/sftp-server
> hasi      136092  0.0  0.0   2604  1700 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
> hasi      136100  0.0  0.0   2604  1944 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
> hasi      136107  0.0  0.0   2604  1752 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
> hasi      136116  0.0  0.0   2604  1716 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
> hasi      136125  0.0  0.0   2604  1804 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
> hasi      136132  0.0  0.0   2604  1772 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
> hasi      136143  0.0  0.0   2604  1940 ?        Ds   14:13   0:00 /usr/lib/ssh/sftp-server
> hasi      136153  0.0  0.0   2604  1864 ?        Ds   14:13   0:00 /usr/lib/ssh/sftp-server
> hasi      136162  0.0  0.0   2604  1772 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
> hasi      136170  0.0  0.0   2604  1716 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
> hasi      136178  0.0  0.0   2604  2000 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
> hasi      136187  0.0  0.0   2604  1688 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
> hasi      136194  0.0  0.0   2604  1972 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
> hasi      136207  0.0  0.0   2604  1772 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
> hasi      136223  0.0  0.0   2604  1916 ?        Ds   14:16   0:00 /usr/lib/ssh/sftp-server
> hasi      136231  0.0  0.0   2604  1776 ?        Ds   14:16   0:00 /usr/lib/ssh/sftp-server
> root      136232  0.0  0.0      0     0 ?        D    14:17   0:00 [kworker/u64:159+flush-btrfs-1]
> hasi      136239  0.0  0.0   2604  1732 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
> hasi      136247  0.0  0.0   2604  1716 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
> hasi      136254  0.0  0.0   2604  1596 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
> hasi      136263  0.0  0.0   2604  1716 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
> hasi      136272  0.0  0.0   2604  1732 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
> hasi      136279  0.0  0.0   2604  1596 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
> hasi      136290  0.0  0.0   2604  1596 ?        Ds   14:18   0:00 /usr/lib/ssh/sftp-server
> hasi      136299  0.0  0.0   2604  1716 ?        Ds   14:18   0:00 /usr/lib/ssh/sftp-server
> hasi      136312  0.0  0.0   2604  1860 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
> hasi      136323  0.0  0.0   2604  1716 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
> hasi      136331  0.0  0.0   2604  1700 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
> hasi      136338  0.0  0.0   2604  1900 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
> hasi      136352  0.0  0.0   2604  1704 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
> hasi      136359  0.0  0.0   2604  1728 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
> hasi      136375  0.0  0.0   2604  1716 ?        Ds   14:21   0:00 /usr/lib/ssh/sftp-server
> hasi      136384  0.0  0.0   2604  1752 ?        Ds   14:21   0:00 /usr/lib/ssh/sftp-server
> hasi      136391  0.0  0.0   2604  1772 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
> hasi      136399  0.0  0.0   2604  1700 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
> hasi      136406  0.0  0.0   2604  1732 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
> hasi      136415  0.0  0.0   2604  1716 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
> hasi      136424  0.0  0.0   2604  1804 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
> hasi      136431  0.0  0.0   2604  1728 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
> root      136434  0.0  0.0      0     0 ?        D    14:23   0:00 [kworker/u64:160+flush-btrfs-1]
> hasi      136442  0.0  0.0   2604  1704 ?        Ds   14:23   0:00 /usr/lib/ssh/sftp-server
> hasi      136450  0.0  0.0   2604  1716 ?        Ds   14:23   0:00 /usr/lib/ssh/sftp-server
> hasi      136458  0.0  0.0   2604  1872 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
> hasi      136465  0.0  0.0   2604  1752 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
> hasi      136473  0.0  0.0   2604  1864 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
> hasi      136480  0.0  0.0   2604  1700 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
> hasi      136487  0.0  0.0   2604  1908 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
> hasi      136495  0.0  0.0   2604  1944 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
> hasi      136507  0.0  0.0   2604  1688 ?        Ds   14:26   0:00 /usr/lib/ssh/sftp-server
> hasi      136516  0.0  0.0   2604  1908 ?        Ds   14:26   0:00 /usr/lib/ssh/sftp-server
> hasi      136526  0.0  0.0   2604  2044 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
> hasi      136537  0.0  0.0   2604  1880 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
> hasi      136545  0.0  0.0   2604  1944 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
> hasi      136554  0.0  0.0   2604  1960 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
> hasi      136569  0.0  0.0   2604  1944 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
> hasi      136577  0.0  0.0   2604  1716 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
> hasi      136587  0.0  0.0   2604  1864 ?        Ds   14:28   0:00 /usr/lib/ssh/sftp-server
> hasi      136596  0.0  0.0   2604  1700 ?        Ds   14:28   0:00 /usr/lib/ssh/sftp-server
> hasi      136604  0.0  0.0   2604  1864 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
> hasi      136611  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
> hasi      136619  0.0  0.0   2604  1728 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
> hasi      136626  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
> hasi      136633  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
> hasi      136640  0.0  0.0   2604  1904 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
> root      136644  0.0  0.0      0     0 ?        D    14:29   0:00 [kworker/u64:161+flush-btrfs-1]
> root      136646  0.0  0.0      0     0 ?        D    14:29   0:00 [kworker/u64:162+flush-btrfs-1]
> hasi      136657  0.0  0.0   2604  2016 ?        Ds   14:31   0:00 /usr/lib/ssh/sftp-server
> hasi      136668  0.0  0.0   2604  1704 ?        Ds   14:31   0:00 /usr/lib/ssh/sftp-server
> hasi      136678  0.0  0.0   2604  1712 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
> hasi      136689  0.0  0.0   2604  1804 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
> hasi      136692  0.0  0.0   2604  1972 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
> hasi      136703  0.0  0.0   2604  1752 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
> hasi      136712  0.0  0.0   2604  1688 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
> hasi      136719  0.0  0.0   2604  1772 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
> hasi      136729  0.0  0.0   2604  1776 ?        Ds   14:33   0:00 /usr/lib/ssh/sftp-server
> hasi      136740  0.0  0.0   2604  1972 ?        Ds   14:33   0:00 /usr/lib/ssh/sftp-server
> hasi      136751  0.0  0.0   2604  1944 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
> hasi      136758  0.0  0.0   2604  1804 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
> hasi      136766  0.0  0.0   2604  1944 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
> hasi      136773  0.0  0.0   2604  1872 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
> hasi      136780  0.0  0.0   2604  1860 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
> hasi      136787  0.0  0.0   2604  1688 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
> hasi      136799  0.0  0.0   2604  1772 ?        Ds   14:36   0:00 /usr/lib/ssh/sftp-server
> hasi      136807  0.0  0.0   2604  1596 ?        Ds   14:36   0:00 /usr/lib/ssh/sftp-server
> hasi      136815  0.0  0.0   2604  1972 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
> hasi      136827  0.0  0.0   2604  1900 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
> hasi      136836  0.0  0.0   2604  1916 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
> hasi      136845  0.0  0.0   2604  1716 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
> hasi      136855  0.0  0.0   2604  1776 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
> hasi      136862  0.0  0.0   2604  1772 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
> root      136866  0.0  0.0      0     0 ?        D    14:38   0:00 [kworker/u64:163+flush-btrfs-1]
> hasi      136874  0.0  0.0   2604  1900 ?        Ds   14:38   0:00 /usr/lib/ssh/sftp-server
> hasi      136906  0.0  0.0   2604  1776 ?        Ds   14:38   0:00 /usr/lib/ssh/sftp-server
> hasi      136914  0.0  0.0   2604  1732 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
> root      136915  0.0  0.0      0     0 ?        D    14:39   0:00 [kworker/u64:164+flush-btrfs-1]
> hasi      136922  0.0  0.0   2604  1752 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
> hasi      136930  0.0  0.0   2604  1688 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
> hasi      136941  0.0  0.0   2604  1596 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
> hasi      136948  0.0  0.0   2604  1916 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
> hasi      136955  0.0  0.0   2604  1700 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
> hasi      136975  0.0  0.0   2604  1728 ?        Ds   14:41   0:00 /usr/lib/ssh/sftp-server
> hasi      136984  0.0  0.0   2604  1688 ?        Ds   14:41   0:00 /usr/lib/ssh/sftp-server
> hasi      136993  0.0  0.0   2604  1700 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
> root      136994  0.0  0.0      0     0 ?        D    14:42   0:00 [kworker/u64:165+flush-btrfs-1]
> hasi      137001  0.0  0.0   2604  1872 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
> hasi      137008  0.0  0.0   2604  1972 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
> hasi      137019  0.0  0.0   2604  1716 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
> hasi      137028  0.0  0.0   2604  1864 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
> hasi      137035  0.0  0.0   2604  1704 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
> hasi      137045  0.0  0.0   2604  1732 ?        Ds   14:43   0:00 /usr/lib/ssh/sftp-server
> hasi      137053  0.0  0.0   2604  1732 ?        Ds   14:43   0:00 /usr/lib/ssh/sftp-server
> hasi      137061  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
> hasi      137071  0.0  0.0   2604  1688 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
> hasi      137079  0.0  0.0   2604  1900 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
> root      137081  0.0  0.0      0     0 ?        D    14:44   0:00 [kworker/u64:166+flush-btrfs-1]
> root      137082  0.0  0.0      0     0 ?        D    14:44   0:00 [kworker/u64:167+flush-btrfs-1]
> hasi      137089  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
> hasi      137096  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
> hasi      137103  0.0  0.0   2604  1700 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
> hasi      137115  0.0  0.0   2604  1772 ?        Ds   14:46   0:00 /usr/lib/ssh/sftp-server
> hasi      137123  0.0  0.0   2604  1904 ?        Ds   14:46   0:00 /usr/lib/ssh/sftp-server
> hasi      137133  0.0  0.0   2604  1772 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
> hasi      137140  0.0  0.0   2604  1596 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
> hasi      137147  0.0  0.0   2604  1732 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
> hasi      137156  0.0  0.0   2604  1704 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
> hasi      137166  0.0  0.0   2604  1700 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
> hasi      137175  0.0  0.0   2604  1776 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
> hasi      137188  0.0  0.0   2604  1804 ?        Ds   14:48   0:00 /usr/lib/ssh/sftp-server
> hasi      137200  0.0  0.0   2604  2044 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
> hasi      137207  0.0  0.0   2604  1944 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
> hasi      137214  0.0  0.0   2604  1732 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
> hasi      137222  0.0  0.0   2604  1872 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
> hasi      137229  0.0  0.0   2604  1960 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
> hasi      137239  0.0  0.0   2604  1716 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
> hasi      137246  0.0  0.0   2604  1688 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
> hasi      137257  0.0  0.0   2604  1704 ?        Ds   14:51   0:00 /usr/lib/ssh/sftp-server
> root      137258  0.0  0.0      0     0 ?        D    14:51   0:00 [kworker/u64:168+flush-btrfs-1]
> hasi      137266  0.0  0.0   2604  1864 ?        Ds   14:51   0:00 /usr/lib/ssh/sftp-server
> hasi      137274  0.0  0.0   2604  1700 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
> hasi      137281  0.0  0.0   2604  1688 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
> hasi      137288  0.0  0.0   2604  1716 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
> hasi      137297  0.0  0.0   2604  1752 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
> hasi      137307  0.0  0.0   2604  1804 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
> hasi      137314  0.0  0.0   2604  1752 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
> hasi      137325  0.0  0.0   2604  1716 ?        Ds   14:53   0:00 /usr/lib/ssh/sftp-server
> hasi      137335  0.0  0.0   2604  1900 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
> hasi      137344  0.0  0.0   2604  1772 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
> hasi      137351  0.0  0.0   2604  1944 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
> hasi      137359  0.0  0.0   2604  1704 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
> hasi      137366  0.0  0.0   2604  1704 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
> hasi      137373  0.0  0.0   2604  1960 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
> hasi      137381  0.0  0.0   2604  1872 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
> hasi      137398  0.0  0.0   2604  1732 ?        Ds   14:56   0:00 /usr/lib/ssh/sftp-server
> hasi      137407  0.0  0.0   2604  1716 ?        Ds   14:56   0:00 /usr/lib/ssh/sftp-server
> hasi      137415  0.0  0.0   2604  1700 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
> hasi      137422  0.0  0.0   2604  1772 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
> hasi      137429  0.0  0.0   2604  1704 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
> hasi      137438  0.0  0.0   2604  1776 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
> hasi      137448  0.0  0.0   2604  1700 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
> hasi      137455  0.0  0.0   2604  1732 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
> hasi      137465  0.0  0.0   2604  1752 ?        Ds   14:58   0:00 /usr/lib/ssh/sftp-server
> hasi      137479  0.0  0.0   2604  1752 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
> hasi      137486  0.0  0.0   2604  1804 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
> hasi      137493  0.0  0.0   2604  1864 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
> hasi      137501  0.0  0.0   2604  1916 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
> hasi      137508  0.0  0.0   2604  1772 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
> hasi      137515  0.0  0.0   2604  2072 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
> hasi      137530  0.0  0.0   2604  1716 ?        Ds   15:00   0:00 /usr/lib/ssh/sftp-server
> root      137531  0.0  0.0      0     0 ?        D    15:00   0:00 [kworker/u64:171+flush-btrfs-1]
> hasi      137547  0.0  0.0   2604  1864 ?        Ds   15:01   0:00 /usr/lib/ssh/sftp-server
> hasi      137557  0.0  0.0   2604  1804 ?        Ds   15:01   0:00 /usr/lib/ssh/sftp-server
> hasi      137566  0.0  0.0   2604  1716 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
> hasi      137573  0.0  0.0   2604  1772 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
> hasi      137580  0.0  0.0   2604  1732 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
> hasi      137589  0.0  0.0   2604  1596 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
> hasi      137598  0.0  0.0   2604  1596 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
> hasi      137605  0.0  0.0   2604  1700 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
> root      137609  0.0  0.0      0     0 ?        D    15:03   0:00 [kworker/u64:172+flush-btrfs-1]
> root      137611  0.0  0.0      0     0 ?        D    15:03   0:00 [kworker/u64:173+flush-btrfs-1]
> hasi      137625  0.0  0.0   2604  1816 ?        Ds   15:03   0:00 /usr/lib/ssh/sftp-server
> hasi      137640  0.0  0.0   2604  1688 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
> hasi      137647  0.0  0.0   2604  2044 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
> hasi      137656  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
> hasi      137664  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
> hasi      137671  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
> hasi      137678  0.0  0.0   2604  1916 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
> hasi      137688  0.0  0.0   2604  1772 ?        Ds   15:05   0:00 /usr/lib/ssh/sftp-server
> hasi      137699  0.0  0.0   2604  1704 ?        Ds   15:06   0:00 /usr/lib/ssh/sftp-server
> hasi      137709  0.0  0.0   2604  1728 ?        Ds   15:06   0:00 /usr/lib/ssh/sftp-server
> hasi      137717  0.0  0.0   2604  1776 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
> hasi      137724  0.0  0.0   2604  2016 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
> hasi      137732  0.0  0.0   2604  1752 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
> hasi      137739  0.0  0.0   2604  1916 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
> hasi      137748  0.0  0.0   2604  1864 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
> hasi      137755  0.0  0.0   2604  1704 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
> hasi      137766  0.0  0.0   2604  1688 ?        Ds   15:08   0:00 /usr/lib/ssh/sftp-server
> hasi      137778  0.0  0.0   2604  1704 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
> hasi      137785  0.0  0.0   2604  1772 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
> hasi      137792  0.0  0.0   2604  1700 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
> hasi      137800  0.0  0.0   2604  1596 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
> hasi      137807  0.0  0.0   2604  1804 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
> hasi      137816  0.0  0.0   2604  2044 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
> hasi      137826  0.0  0.0   2604  1776 ?        Ds   15:10   0:00 /usr/lib/ssh/sftp-server
> hasi      137844  0.0  0.0   2604  1816 ?        Ds   15:11   0:00 /usr/lib/ssh/sftp-server
> hasi      137856  0.0  0.0   2604  1844 ?        Ds   15:11   0:00 /usr/lib/ssh/sftp-server
> hasi      137865  0.0  0.0   2604  1732 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
> hasi      137872  0.0  0.0   2604  1772 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
> hasi      137879  0.0  0.0   2604  1752 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
> hasi      137886  0.0  0.0   2604  1700 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
> hasi      137895  0.0  0.0   2604  1596 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
> hasi      137902  0.0  0.0   2604  1944 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
> hasi      137914  0.0  0.0   2604  1752 ?        Ds   15:13   0:00 /usr/lib/ssh/sftp-server
> hasi      137925  0.0  0.0   2604  1716 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
> hasi      137932  0.0  0.0   2604  1700 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
> hasi      137939  0.0  0.0   2604  1704 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
> hasi      137947  0.0  0.0   2604  1960 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
> hasi      137956  0.0  0.0   2604  1732 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
> hasi      137963  0.0  0.0   2604  1716 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
> hasi      137977  0.0  0.0   2604  1716 ?        Ds   15:15   0:00 /usr/lib/ssh/sftp-server
> hasi      137992  0.0  0.0   2604  1716 ?        Ds   15:16   0:00 /usr/lib/ssh/sftp-server
> hasi      138002  0.0  0.0   2604  1872 ?        Ds   15:16   0:00 /usr/lib/ssh/sftp-server
> hasi      138011  0.0  0.0   2604  1716 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
> hasi      138018  0.0  0.0   2604  1872 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
> hasi      138025  0.0  0.0   2604  1804 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
> hasi      138046  0.0  0.0   2604  1880 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
> hasi      138057  0.0  0.0   2604  1988 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
> hasi      138065  0.0  0.0   2604  1844 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
> hasi      138088  0.0  0.0   2604  1864 ?        Ds   15:18   0:00 /usr/lib/ssh/sftp-server
> hasi      138099  0.0  0.0   2604  1752 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
> hasi      138106  0.0  0.0   2604  1916 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
> hasi      138113  0.0  0.0   2604  1704 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
> hasi      138121  0.0  0.0   2604  1872 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
> hasi      138132  0.0  0.0   2604  1776 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
> hasi      138139  0.0  0.0   2604  1688 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
> hasi      138147  0.0  0.0   2604  1732 ?        Ds   15:20   0:00 /usr/lib/ssh/sftp-server
> hasi      138162  0.0  0.0   2604  1880 ?        Ds   15:21   0:00 /usr/lib/ssh/sftp-server
> hasi      138173  0.0  0.0   2604  1944 ?        Ds   15:21   0:00 /usr/lib/ssh/sftp-server
> hasi      138182  0.0  0.0   2604  1776 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
> hasi      138189  0.0  0.0   2604  1716 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
> hasi      138196  0.0  0.0   2604  1688 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
> hasi      138203  0.0  0.0   2604  1776 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
> hasi      138212  0.0  0.0   2604  1872 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
> hasi      138219  0.0  0.0   2604  1688 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
> hasi      138229  0.0  0.0   2604  1716 ?        Ds   15:23   0:00 /usr/lib/ssh/sftp-server
> hasi      138243  0.0  0.0   2604  1944 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
> hasi      138250  0.0  0.0   2604  1728 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
> hasi      138257  0.0  0.0   2604  1900 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
> hasi      138267  0.0  0.0   2604  1904 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
> hasi      138276  0.0  0.0   2604  1688 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
> hasi      138283  0.0  0.0   2604  1700 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
> hasi      138293  0.0  0.0   2604  1816 ?        Ds   15:25   0:00 /usr/lib/ssh/sftp-server
> hasi      138305  0.0  0.0   2604  1596 ?        Ds   15:26   0:00 /usr/lib/ssh/sftp-server
> hasi      138315  0.0  0.0   2604  1688 ?        Ds   15:26   0:00 /usr/lib/ssh/sftp-server
> hasi      138326  0.0  0.0   2604  1728 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
> hasi      138333  0.0  0.0   2604  1716 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
> hasi      138340  0.0  0.0   2604  1716 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
> hasi      138347  0.0  0.0   2604  1596 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
> hasi      138356  0.0  0.0   2604  1776 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
> hasi      138363  0.0  0.0   2604  1688 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
> hasi      138373  0.0  0.0   2604  1716 ?        Ds   15:28   0:00 /usr/lib/ssh/sftp-server
> hasi      138384  0.0  0.0   2604  1688 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
> hasi      138391  0.0  0.0   2604  1596 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
> hasi      138398  0.0  0.0   2604  1728 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
> hasi      138406  0.0  0.0   2604  1716 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
> hasi      138413  0.0  0.0   2604  1872 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
> hasi      138423  0.0  0.0   2604  1916 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
> hasi      138433  0.0  0.0   2604  1804 ?        Ds   15:30   0:00 /usr/lib/ssh/sftp-server
> hasi      138455  0.0  0.0   2604  1804 ?        Ds   15:31   0:00 /usr/lib/ssh/sftp-server
> hasi      138470  0.0  0.0   2604  2072 ?        Ds   15:31   0:00 /usr/lib/ssh/sftp-server
> root      138974  0.0  0.0   3936  2080 pts/2    S+   16:32   0:00 grep         D
> [root@coldnas ~]# uname -a
> Linux coldnas 6.10.5-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 15 Aug 2024 00:25:30 +0000 x86_64 GNU/Linux


>> Am 08.08.2024 um 16:23 schrieb John Stoffel <john@stoffel.org>:
>> 
>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>> 
>>> Hi,
>>>> On 7. Aug 2024, at 23:05, John Stoffel <john@stoffel.org> wrote:
>>>> 
>>>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>>>> 
>>>> 
>>>> 
>>>>> I had some more time at hand and managed to compile 5.15.164. The
>>>>> issue is the same. After around 1h30m of work it hangs. I’ll try to
>>>>> reproduce this on a newer supported kernel if I can.
>>>> 
>>>> Supported by whom? NixOS? Why don't you just install Linux kernel
>>>> 6.6.x and see if the problem is still there? 5.15.x is ancient and
>>>> un-supported upstream now.
>> 
>>> I did just that. However, 5.15 being “un-supported” by upstream is
>>> confusing to me. It’s an official LTS kernel with an EOL of December
>>> 2026.
>> 
>> To quote the kernel.org:
>> 
>> Longterm
>> 
>> There are usually several "longterm maintenance" kernel releases
>> provided for the purposes of backporting bugfixes for older kernel
>> trees. Only important bugfixes are applied to such kernels and they
>> don't usually see very frequent releases, especially for older trees.
>> 
>> So when we run into people having problems with LTS kernels, the first
>> thing we ask is for them to run the most recent kernels, because
>> that's where the bug fixing happens.  
>> 
>> In any case, there have been some bugs in the RAID5/RAID6 code
>> recently, so going to a recent kernel will help track these down.
>> 
>> 
>>> Also, I’d like to note that NixOS kernels tend to be very close to
>>> upstream. The only patches I can see being involved here are
>>> ones that patch out some hard-coded references to user-space paths:
>> 
>> Not in this case; kernel 5.15 is ancient. You should be running 6.9.x
>> or even newer when debugging issues like this.
>> 
>>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/linux-kernels.nix#L173
>>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/request-key-helper.patch
>>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/bridge-stp-helper.patch
>> 
>>> Kernel is now:
>> 
>>> Linux barbrady08 6.10.3 #1-NixOS SMP PREEMPT_DYNAMIC Sat Aug  3 07:01:09 UTC 2024 x86_64 GNU/Linux
>> 
>> 
>>> The issue is still there on 6.10.3 and now looks like shown below.
>> 
>> Great! Thanks for making this change - it will let the developers
>> help you more easily.
>> 
>>> I’m aware that this output shows symptoms and not
>>> (necessarily) the cause. I’m currently a bit out of ideas about where
>>> to look for more information and would appreciate any pointers. My
>>> suspicion is an interaction problem triggered by the use of NVMe in
>>> combination with other subsystems (xfs, dm-crypt and raid are the
>>> ones I’m aware of playing a role).
>> 
>>> The use of NVMe itself likely isn’t the issue (we’ve been using NVMe
>>> on similar hosts, and also in combination with dm-crypt on this
>>> kernel, for a while now), and I could imagine that it triggers a race
>>> condition due to the higher performance - although the specific
>>> performance figures aren't *that* high: right before the lockup I
>>> see ~700 IOPS reading and ~2.5k IOPS writing. So we have used NVMe
>>> with dm-crypt before, but not together with raid.
>> 
>>> I can perform debugging on that machine as needed, but googling for
>>> any combination of hung tasks related to nvme/xfs/crypt/raid only
>>> turns up generic performance concerns from forums, an
>>> unrelated xfs issue mentioned by Red Hat, and the list archive entry
>>> for this post.
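[Editor's note: when the hang recurs, the following commands can capture more state than the hung-task watchdog alone. This is a rough sketch; the md device name (/dev/md127) and dm minor (dm-4, taken from the kcryptd-253:4 workqueue name above) are placeholder assumptions and must be adjusted to the actual setup.]

```shell
# Dump the stacks of all blocked (D-state) tasks into dmesg,
# without waiting for the 122s hung-task timeout:
echo w > /proc/sysrq-trigger

# raid456 stripe cache occupancy. If stripe_cache_active stays pinned
# near stripe_cache_size while I/O is hung, the array has run out of
# stripe heads, which points at the raid456 layer:
cat /sys/block/md127/md/stripe_cache_size
cat /sys/block/md127/md/stripe_cache_active

# In-flight request counts (reads writes) per layer, to narrow down
# where requests stop completing - md, the dm-crypt target, or nvme:
cat /sys/block/md127/inflight
cat /sys/block/dm-4/inflight
cat /sys/block/nvme0n1/inflight
```

Comparing the inflight counters across the stack usually shows whether requests are stuck below md (nvme still busy) or queued above it (md busy, nvme idle).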
>> 
>> Can you try setting up some loop devices in the same type of
>> configuration, and seeing if you can replicate the issue that way?
>> Let's try to get the nvme stuff out of the way to see if this can be
>> replicated more easily.  
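[Editor's note: the loop-device reproduction suggested above could be set up roughly as follows. Sizes, device names (/dev/md99, testcrypt), and the four-disk layout are illustrative assumptions, not the reporter's actual configuration; the internal write-intent bitmap matters because the hung stacks sit in md_bitmap_startwrite.]

```shell
# Create small backing files and attach them to loop devices.
for i in 0 1 2 3; do
    truncate -s 2G /var/tmp/raid$i.img
    losetup /dev/loop$i /var/tmp/raid$i.img
done

# Assemble a 4-disk RAID-6 with an internal write-intent bitmap.
mdadm --create /dev/md99 --level=6 --raid-devices=4 --bitmap=internal \
      /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# Layer LUKS on the array, then XFS on the crypt device.
cryptsetup luksFormat /dev/md99
cryptsetup open /dev/md99 testcrypt
mkfs.xfs /dev/mapper/testcrypt
mount /dev/mapper/testcrypt /mnt

# Reproduce the workload: rsync a tree with millions of small files
# into /mnt, or generate equivalent metadata-heavy write traffic.
```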
>> 
>> 
>>> [ 7497.019235] INFO: task .backy-wrapped:2706 blocked for more than 122 seconds.
>>> [ 7497.027265]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.032173] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.040974] task:.backy-wrapped  state:D stack:0     pid:2706  tgid:2706  ppid:1      flags:0x00000002
>>> [ 7497.040979] Call Trace:
>>> [ 7497.040981]  <TASK>
>>> [ 7497.040987]  __schedule+0x3fa/0x1550
>>> [ 7497.040996]  ? xfs_iextents_copy+0xec/0x1b0 [xfs]
>>> [ 7497.041085]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.041089]  ? xlog_copy_iovec+0x30/0x90 [xfs]
>>> [ 7497.041168]  schedule+0x27/0xf0
>>> [ 7497.041171]  io_schedule+0x46/0x70
>>> [ 7497.041173]  folio_wait_bit_common+0x13f/0x340
>>> [ 7497.041180]  ? __pfx_wake_page_function+0x10/0x10
>>> [ 7497.041187]  folio_wait_writeback+0x2b/0x80
>>> [ 7497.041191]  truncate_inode_partial_folio+0x5b/0x190
>>> [ 7497.041194]  truncate_inode_pages_range+0x1de/0x400
>>> [ 7497.041207]  evict+0x1b0/0x1d0
>>> [ 7497.041212]  __dentry_kill+0x6e/0x170
>>> [ 7497.041216]  dput+0xe5/0x1b0
>>> [ 7497.041218]  do_renameat2+0x386/0x600
>>> [ 7497.041226]  __x64_sys_rename+0x43/0x50
>>> [ 7497.041229]  do_syscall_64+0xb7/0x200
>>> [ 7497.041234]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>>> [ 7497.041236] RIP: 0033:0x7f4be586f75b
>>> [ 7497.041265] RSP: 002b:00007fffd2706538 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>> [ 7497.041267] RAX: ffffffffffffffda RBX: 00007fffd27065d0 RCX: 00007f4be586f75b
>>> [ 7497.041269] RDX: 0000000000000000 RSI: 00007f4bd6f73e50 RDI: 00007f4bd6f732d0
>>> [ 7497.041270] RBP: 00007fffd2706580 R08: 00000000ffffffff R09: 0000000000000000
>>> [ 7497.041271] R10: 00007fffd27067b0 R11: 0000000000000246 R12: 00000000ffffff9c
>>> [ 7497.041273] R13: 00000000ffffff9c R14: 0000000037fb4ab0 R15: 00007f4be5814810
>>> [ 7497.041277]  </TASK>
>>> [ 7497.041281] INFO: task kworker/u131:1:12780 blocked for more than 122 seconds.
>>> [ 7497.049410]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.054317] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.063124] task:kworker/u131:1  state:D stack:0     pid:12780 tgid:12780 ppid:2      flags:0x00004000
>>> [ 7497.063131] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>> [ 7497.063140] Call Trace:
>>> [ 7497.063141]  <TASK>
>>> [ 7497.063145]  __schedule+0x3fa/0x1550
>>> [ 7497.063154]  schedule+0x27/0xf0
>>> [ 7497.063156]  md_bitmap_startwrite+0x14f/0x1c0
>>> [ 7497.063160]  ? __pfx_autoremove_wake_function+0x10/0x10
>>> [ 7497.063168]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>> [ 7497.063175]  raid5_make_request+0x34d/0x1280 [raid456]
>>> [ 7497.063182]  ? __pfx_woken_wake_function+0x10/0x10
>>> [ 7497.063184]  ? bio_split_rw+0x193/0x260
>>> [ 7497.063190]  md_handle_request+0x153/0x270
>>> [ 7497.063194]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.063198]  __submit_bio+0x190/0x240
>>> [ 7497.063203]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>> [ 7497.063205]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.063207]  ? submit_bio_noacct+0x46/0x5a0
>>> [ 7497.063210]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>> [ 7497.063214]  process_one_work+0x18f/0x3b0
>>> [ 7497.063219]  worker_thread+0x233/0x340
>>> [ 7497.063222]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.063225]  kthread+0xcd/0x100
>>> [ 7497.063228]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.063230]  ret_from_fork+0x31/0x50
>>> [ 7497.063234]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.063236]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.063243]  </TASK>
>>> [ 7497.063246] INFO: task kworker/u131:0:17487 blocked for more than 122 seconds.
>>> [ 7497.071367]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.076269] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.085073] task:kworker/u131:0  state:D stack:0     pid:17487 tgid:17487 ppid:2      flags:0x00004000
>>> [ 7497.085081] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>> [ 7497.085086] Call Trace:
>>> [ 7497.085087]  <TASK>
>>> [ 7497.085089]  __schedule+0x3fa/0x1550
>>> [ 7497.085094]  schedule+0x27/0xf0
>>> [ 7497.085096]  md_bitmap_startwrite+0x14f/0x1c0
>>> [ 7497.085098]  ? __pfx_autoremove_wake_function+0x10/0x10
>>> [ 7497.085102]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>> [ 7497.085108]  raid5_make_request+0x34d/0x1280 [raid456]
>>> [ 7497.085114]  ? __pfx_woken_wake_function+0x10/0x10
>>> [ 7497.085116]  ? bio_split_rw+0x193/0x260
>>> [ 7497.085120]  md_handle_request+0x153/0x270
>>> [ 7497.085122]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.085125]  __submit_bio+0x190/0x240
>>> [ 7497.085128]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>> [ 7497.085131]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.085133]  ? submit_bio_noacct+0x46/0x5a0
>>> [ 7497.085135]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>> [ 7497.085138]  process_one_work+0x18f/0x3b0
>>> [ 7497.085142]  worker_thread+0x233/0x340
>>> [ 7497.085145]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.085148]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.085150]  kthread+0xcd/0x100
>>> [ 7497.085152]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.085155]  ret_from_fork+0x31/0x50
>>> [ 7497.085157]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.085159]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.085164]  </TASK>
>>> [ 7497.085165] INFO: task kworker/u131:2:18973 blocked for more than 122 seconds.
>>> [ 7497.093282]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.098185] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.106988] task:kworker/u131:2  state:D stack:0     pid:18973 tgid:18973 ppid:2      flags:0x00004000
>>> [ 7497.106993] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>> [ 7497.106998] Call Trace:
>>> [ 7497.106999]  <TASK>
>>> [ 7497.107001]  __schedule+0x3fa/0x1550
>>> [ 7497.107006]  schedule+0x27/0xf0
>>> [ 7497.107009]  md_bitmap_startwrite+0x14f/0x1c0
>>> [ 7497.107012]  ? __pfx_autoremove_wake_function+0x10/0x10
>>> [ 7497.107016]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>> [ 7497.107021]  raid5_make_request+0x34d/0x1280 [raid456]
>>> [ 7497.107026]  ? __pfx_woken_wake_function+0x10/0x10
>>> [ 7497.107028]  ? bio_split_rw+0x193/0x260
>>> [ 7497.107033]  md_handle_request+0x153/0x270
>>> [ 7497.107036]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.107039]  __submit_bio+0x190/0x240
>>> [ 7497.107042]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>> [ 7497.107044]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.107046]  ? submit_bio_noacct+0x46/0x5a0
>>> [ 7497.107049]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>> [ 7497.107052]  process_one_work+0x18f/0x3b0
>>> [ 7497.107055]  worker_thread+0x233/0x340
>>> [ 7497.107058]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.107060]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.107063]  kthread+0xcd/0x100
>>> [ 7497.107065]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.107067]  ret_from_fork+0x31/0x50
>>> [ 7497.107069]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.107071]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.107081]  </TASK>
>>> [ 7497.107086] INFO: task rsync:23530 blocked for more than 122 seconds.
>>> [ 7497.114327]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.119226] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.128020] task:rsync           state:D stack:0     pid:23530 tgid:23530 ppid:23520  flags:0x00000000
>>> [ 7497.128024] Call Trace:
>>> [ 7497.128025]  <TASK>
>>> [ 7497.128027]  __schedule+0x3fa/0x1550
>>> [ 7497.128030]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.128034]  schedule+0x27/0xf0
>>> [ 7497.128036]  schedule_timeout+0x15d/0x170
>>> [ 7497.128040]  __down_common+0x119/0x220
>>> [ 7497.128045]  down+0x47/0x60
>>> [ 7497.128048]  xfs_buf_lock+0x31/0xe0 [xfs]
>>> [ 7497.128131]  xfs_buf_find_lock+0x55/0x100 [xfs]
>>> [ 7497.128185]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
>>> [ 7497.128236]  xfs_buf_read_map+0x62/0x2a0 [xfs]
>>> [ 7497.128287]  ? xfs_read_agf+0x97/0x150 [xfs]
>>> [ 7497.128357]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
>>> [ 7497.128429]  ? xfs_read_agf+0x97/0x150 [xfs]
>>> [ 7497.128489]  xfs_read_agf+0x97/0x150 [xfs]
>>> [ 7497.128540]  xfs_alloc_read_agf+0x5a/0x200 [xfs]
>>> [ 7497.128589]  xfs_alloc_fix_freelist+0x345/0x660 [xfs]
>>> [ 7497.128641]  xfs_alloc_vextent_prepare_ag+0x2d/0x120 [xfs]
>>> [ 7497.128690]  xfs_alloc_vextent_exact_bno+0xd1/0x100 [xfs]
>>> [ 7497.128740]  xfs_ialloc_ag_alloc+0x177/0x610 [xfs]
>>> [ 7497.128812]  xfs_dialloc+0x219/0x7b0 [xfs]
>>> [ 7497.128864]  ? xfs_trans_alloc_icreate+0x93/0x120 [xfs]
>>> [ 7497.128935]  xfs_create+0x2c7/0x640 [xfs]
>>> [ 7497.128998]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.129001]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.129003]  ? get_cached_acl+0x4c/0x90
>>> [ 7497.129008]  xfs_generic_create+0x321/0x3a0 [xfs]
>>> [ 7497.129061]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.129065]  path_openat+0xf82/0x1240
>>> [ 7497.129072]  do_filp_open+0xc4/0x170
>>> [ 7497.129084]  do_sys_openat2+0xab/0xe0
>>> [ 7497.129090]  __x64_sys_openat+0x57/0xa0
>>> [ 7497.129093]  do_syscall_64+0xb7/0x200
>>> [ 7497.129096]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>>> [ 7497.129099] RIP: 0033:0x7f6809d2be2f
>>> [ 7497.129121] RSP: 002b:00007ffe3d410cf0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
>>> [ 7497.129123] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6809d2be2f
>>> [ 7497.129124] RDX: 00000000000000c2 RSI: 00007ffe3d412fc0 RDI: 00000000ffffff9c
>>> [ 7497.129126] RBP: 000000000003a2f8 R08: 001f1108db8eff56 R09: 00007ffe3d410f2c
>>> [ 7497.129128] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffe3d41300b
>>> [ 7497.129129] R13: 00007ffe3d412fc0 R14: 8421084210842109 R15: 00007f6809dc6a80
>>> [ 7497.129133]  </TASK>
>>> [ 7497.129146] INFO: task kworker/u131:3:23611 blocked for more than 122 seconds.
>>> [ 7497.137277]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.142187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.150980] task:kworker/u131:3  state:D stack:0     pid:23611 tgid:23611 ppid:2      flags:0x00004000
>>> [ 7497.150986] Workqueue: writeback wb_workfn (flush-253:4)
>>> [ 7497.150993] Call Trace:
>>> [ 7497.150995]  <TASK>
>>> [ 7497.150998]  __schedule+0x3fa/0x1550
>>> [ 7497.151007]  schedule+0x27/0xf0
>>> [ 7497.151009]  schedule_timeout+0x15d/0x170
>>> [ 7497.151013]  __wait_for_common+0x90/0x1c0
>>> [ 7497.151015]  ? __pfx_schedule_timeout+0x10/0x10
>>> [ 7497.151020]  xfs_buf_iowait+0x1c/0xc0 [xfs]
>>> [ 7497.151094]  __xfs_buf_submit+0x132/0x1e0 [xfs]
>>> [ 7497.151146]  xfs_buf_read_map+0x129/0x2a0 [xfs]
>>> [ 7497.151197]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>>> [ 7497.151267]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
>>> [ 7497.151336]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>>> [ 7497.151396]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>>> [ 7497.151446]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
>>> [ 7497.151497]  xfs_btree_lookup+0xea/0x500 [xfs]
>>> [ 7497.151546]  ? xfs_btree_increment+0x44/0x310 [xfs]
>>> [ 7497.151596]  xfs_alloc_fixup_trees+0x66/0x4c0 [xfs]
>>> [ 7497.151661]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>>> [ 7497.151710]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
>>> [ 7497.151764]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
>>> [ 7497.151813]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.151817]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
>>> [ 7497.151884]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
>>> [ 7497.151938]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
>>> [ 7497.151999]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
>>> [ 7497.152048]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
>>> [ 7497.152105]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
>>> [ 7497.152155]  xfs_map_blocks+0x257/0x420 [xfs]
>>> [ 7497.152228]  iomap_writepages+0x271/0x9b0
>>> [ 7497.152235]  xfs_vm_writepages+0x67/0x90 [xfs]
>>> [ 7497.152287]  do_writepages+0x76/0x260
>>> [ 7497.152294]  ? uas_submit_urbs+0x8c/0x4c0 [uas]
>>> [ 7497.152297]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.152300]  ? psi_group_change+0x213/0x3c0
>>> [ 7497.152305]  __writeback_single_inode+0x3d/0x350
>>> [ 7497.152307]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.152309]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.152312]  writeback_sb_inodes+0x21c/0x4e0
>>> [ 7497.152323]  __writeback_inodes_wb+0x4c/0xf0
>>> [ 7497.152325]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.152328]  wb_writeback+0x193/0x310
>>> [ 7497.152332]  wb_workfn+0x357/0x450
>>> [ 7497.152337]  process_one_work+0x18f/0x3b0
>>> [ 7497.152342]  worker_thread+0x233/0x340
>>> [ 7497.152345]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.152348]  kthread+0xcd/0x100
>>> [ 7497.152352]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.152354]  ret_from_fork+0x31/0x50
>>> [ 7497.152358]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.152360]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.152366]  </TASK>
>>> [ 7497.152368] INFO: task kworker/u131:4:23612 blocked for more than 123 seconds.
>>> [ 7497.160489]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.165390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.174190] task:kworker/u131:4  state:D stack:0     pid:23612 tgid:23612 ppid:2      flags:0x00004000
>>> [ 7497.174194] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>> [ 7497.174200] Call Trace:
>>> [ 7497.174201]  <TASK>
>>> [ 7497.174203]  __schedule+0x3fa/0x1550
>>> [ 7497.174208]  schedule+0x27/0xf0
>>> [ 7497.174210]  md_bitmap_startwrite+0x14f/0x1c0
>>> [ 7497.174214]  ? __pfx_autoremove_wake_function+0x10/0x10
>>> [ 7497.174219]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>> [ 7497.174227]  raid5_make_request+0x34d/0x1280 [raid456]
>>> [ 7497.174233]  ? __pfx_woken_wake_function+0x10/0x10
>>> [ 7497.174235]  ? bio_split_rw+0x193/0x260
>>> [ 7497.174242]  md_handle_request+0x153/0x270
>>> [ 7497.174245]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.174248]  __submit_bio+0x190/0x240
>>> [ 7497.174252]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>> [ 7497.174255]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.174257]  ? submit_bio_noacct+0x46/0x5a0
>>> [ 7497.174259]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>> [ 7497.174263]  process_one_work+0x18f/0x3b0
>>> [ 7497.174266]  worker_thread+0x233/0x340
>>> [ 7497.174269]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.174271]  kthread+0xcd/0x100
>>> [ 7497.174273]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.174276]  ret_from_fork+0x31/0x50
>>> [ 7497.174277]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.174279]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.174285]  </TASK>
>>> [ 7497.174292] INFO: task kworker/u130:33:23645 blocked for more than 123 seconds.
>>> [ 7497.182499]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.187400] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.196203] task:kworker/u130:33 state:D stack:0     pid:23645 tgid:23645 ppid:2      flags:0x00004000
>>> [ 7497.196209] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
>>> [ 7497.196281] Call Trace:
>>> [ 7497.196282]  <TASK>
>>> [ 7497.196285]  __schedule+0x3fa/0x1550
>>> [ 7497.196289]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.196293]  schedule+0x27/0xf0
>>> [ 7497.196295]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
>>> [ 7497.196346]  ? __pfx_default_wake_function+0x10/0x10
>>> [ 7497.196351]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
>>> [ 7497.196400]  xlog_write+0x310/0x470 [xfs]
>>> [ 7497.196451]  xlog_cil_push_work+0x6a5/0x880 [xfs]
>>> [ 7497.196503]  process_one_work+0x18f/0x3b0
>>> [ 7497.196507]  worker_thread+0x233/0x340
>>> [ 7497.196510]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.196512]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.196515]  kthread+0xcd/0x100
>>> [ 7497.196517]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.196519]  ret_from_fork+0x31/0x50
>>> [ 7497.196522]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.196524]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.196529]  </TASK>
>>> [ 7497.196531] INFO: task kworker/u131:6:23863 blocked for more than 123 seconds.
>>> [ 7497.204648]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.209539] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.218347] task:kworker/u131:6  state:D stack:0     pid:23863 tgid:23863 ppid:2      flags:0x00004000
>>> [ 7497.218353] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>> [ 7497.218359] Call Trace:
>>> [ 7497.218360]  <TASK>
>>> [ 7497.218363]  __schedule+0x3fa/0x1550
>>> [ 7497.218369]  schedule+0x27/0xf0
>>> [ 7497.218371]  md_bitmap_startwrite+0x14f/0x1c0
>>> [ 7497.218375]  ? __pfx_autoremove_wake_function+0x10/0x10
>>> [ 7497.218379]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>> [ 7497.218384]  raid5_make_request+0x34d/0x1280 [raid456]
>>> [ 7497.218390]  ? __pfx_woken_wake_function+0x10/0x10
>>> [ 7497.218392]  ? bio_split_rw+0x193/0x260
>>> [ 7497.218398]  md_handle_request+0x153/0x270
>>> [ 7497.218401]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.218405]  __submit_bio+0x190/0x240
>>> [ 7497.218408]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>> [ 7497.218410]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.218413]  ? submit_bio_noacct+0x46/0x5a0
>>> [ 7497.218415]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>> [ 7497.218419]  process_one_work+0x18f/0x3b0
>>> [ 7497.218423]  worker_thread+0x233/0x340
>>> [ 7497.218426]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.218428]  kthread+0xcd/0x100
>>> [ 7497.218430]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.218433]  ret_from_fork+0x31/0x50
>>> [ 7497.218435]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.218437]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.218442]  </TASK>
>>> [ 7497.218444] INFO: task kworker/u131:7:23864 blocked for more than 123 seconds.
>>> [ 7497.226572]       Not tainted 6.10.3 #1-NixOS
>>> [ 7497.231475] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 7497.240277] task:kworker/u131:7  state:D stack:0     pid:23864 tgid:23864 ppid:2      flags:0x00004000
>>> [ 7497.240282] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>> [ 7497.240287] Call Trace:
>>> [ 7497.240288]  <TASK>
>>> [ 7497.240290]  __schedule+0x3fa/0x1550
>>> [ 7497.240298]  schedule+0x27/0xf0
>>> [ 7497.240301]  md_bitmap_startwrite+0x14f/0x1c0
>>> [ 7497.240304]  ? __pfx_autoremove_wake_function+0x10/0x10
>>> [ 7497.240310]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>> [ 7497.240314]  raid5_make_request+0x34d/0x1280 [raid456]
>>> [ 7497.240320]  ? __pfx_woken_wake_function+0x10/0x10
>>> [ 7497.240322]  ? bio_split_rw+0x193/0x260
>>> [ 7497.240328]  md_handle_request+0x153/0x270
>>> [ 7497.240330]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.240334]  __submit_bio+0x190/0x240
>>> [ 7497.240338]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>> [ 7497.240340]  ? srso_alias_return_thunk+0x5/0xfbef5
>>> [ 7497.240342]  ? submit_bio_noacct+0x46/0x5a0
>>> [ 7497.240345]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>> [ 7497.240348]  process_one_work+0x18f/0x3b0
>>> [ 7497.240353]  worker_thread+0x233/0x340
>>> [ 7497.240356]  ? __pfx_worker_thread+0x10/0x10
>>> [ 7497.240358]  kthread+0xcd/0x100
>>> [ 7497.240361]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.240364]  ret_from_fork+0x31/0x50
>>> [ 7497.240366]  ? __pfx_kthread+0x10/0x10
>>> [ 7497.240368]  ret_from_fork_asm+0x1a/0x30
>>> [ 7497.240375]  </TASK>
>>> [ 7497.240376] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
>> 
>>>> 
>>>> 
>>>> 
>>>>> Kernel:
>>>> 
>>>>> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024
>>>> 
>>>>> The config is unchanged except for the deprecated NFSD_V2_ACL and NFSD_V3 options, which I had to remove. NFS is not in use on this server, though.
>>>> 
>>>>> Output:
>>>> 
>>>>> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
>>>>> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
>>>>> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4549.860435] Call Trace:
>>>>> [ 4549.860437]  <TASK>
>>>>> [ 4549.860440]  __schedule+0x373/0x1580
>>>>> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
>>>>> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
>>>>> [ 4549.860453]  schedule+0x5b/0xe0
>>>>> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4549.860459]  ? finish_wait+0x90/0x90
>>>>> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>>>> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
>>>>> [ 4549.860490]  ? finish_wait+0x90/0x90
>>>>> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
>>>>> [ 4549.860495]  md_handle_request+0x11f/0x1b0
>>>>> [ 4549.860500]  md_submit_bio+0x6e/0xb0
>>>>> [ 4549.860502]  __submit_bio+0x18c/0x220
>>>>> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4549.860517]  process_one_work+0x1d3/0x360
>>>>> [ 4549.860521]  worker_thread+0x4d/0x3b0
>>>>> [ 4549.860523]  ? process_one_work+0x360/0x360
>>>>> [ 4549.860525]  kthread+0x115/0x140
>>>>> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4549.860530]  ret_from_fork+0x1f/0x30
>>>>> [ 4549.860535]  </TASK>
>>>>> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
>>>>> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
>>>>> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4549.882368] Call Trace:
>>>>> [ 4549.882369]  <TASK>
>>>>> [ 4549.882370]  __schedule+0x373/0x1580
>>>>> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>> [ 4549.882379]  schedule+0x5b/0xe0
>>>>> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4549.882384]  ? finish_wait+0x90/0x90
>>>>> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
>>>>> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.882403]  ? finish_wait+0x90/0x90
>>>>> [ 4549.882406]  md_handle_request+0x11f/0x1b0
>>>>> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
>>>>> [ 4549.882413]  md_submit_bio+0x6e/0xb0
>>>>> [ 4549.882415]  __submit_bio+0x18c/0x220
>>>>> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4549.882428]  process_one_work+0x1d3/0x360
>>>>> [ 4549.882431]  worker_thread+0x4d/0x3b0
>>>>> [ 4549.882433]  ? process_one_work+0x360/0x360
>>>>> [ 4549.882435]  kthread+0x115/0x140
>>>>> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4549.882438]  ret_from_fork+0x1f/0x30
>>>>> [ 4549.882442]  </TASK>
>>>>> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
>>>>> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
>>>>> [ 4549.904411] Call Trace:
>>>>> [ 4549.904412]  <TASK>
>>>>> [ 4549.904414]  __schedule+0x373/0x1580
>>>>> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
>>>>> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
>>>>> [ 4549.904498]  schedule+0x5b/0xe0
>>>>> [ 4549.904500]  io_schedule+0x42/0x70
>>>>> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
>>>>> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
>>>>> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
>>>>> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
>>>>> [ 4549.904520]  evict+0x15f/0x180
>>>>> [ 4549.904524]  __dentry_kill+0xde/0x170
>>>>> [ 4549.904527]  dput+0x139/0x320
>>>>> [ 4549.904529]  do_renameat2+0x375/0x5f0
>>>>> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
>>>>> [ 4549.904538]  do_syscall_64+0x34/0x80
>>>>> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>>>> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
>>>>> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>>>> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
>>>>> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
>>>>> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
>>>>> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
>>>>> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
>>>>> [ 4549.904555]  </TASK>
>>>>> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
>>>>> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
>>>>> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4549.926380] Call Trace:
>>>>> [ 4549.926381]  <TASK>
>>>>> [ 4549.926383]  __schedule+0x373/0x1580
>>>>> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>> [ 4549.926392]  schedule+0x5b/0xe0
>>>>> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4549.926397]  ? finish_wait+0x90/0x90
>>>>> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
>>>>> [ 4549.926413]  ? finish_wait+0x90/0x90
>>>>> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
>>>>> [ 4549.926418]  md_handle_request+0x11f/0x1b0
>>>>> [ 4549.926422]  md_submit_bio+0x6e/0xb0
>>>>> [ 4549.926424]  __submit_bio+0x18c/0x220
>>>>> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4549.926437]  process_one_work+0x1d3/0x360
>>>>> [ 4549.926441]  worker_thread+0x4d/0x3b0
>>>>> [ 4549.926442]  ? process_one_work+0x360/0x360
>>>>> [ 4549.926444]  kthread+0x115/0x140
>>>>> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4549.926448]  ret_from_fork+0x1f/0x30
>>>>> [ 4549.926454]  </TASK>
>>>>> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
>>>>> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
>>>>> [ 4549.947503] Call Trace:
>>>>> [ 4549.947505]  <TASK>
>>>>> [ 4549.947505]  ? usleep_range_state+0x90/0x90
>>>>> [ 4549.947510]  __schedule+0x373/0x1580
>>>>> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
>>>>> [ 4549.947519]  ? usleep_range_state+0x90/0x90
>>>>> [ 4549.947521]  schedule+0x5b/0xe0
>>>>> [ 4549.947523]  schedule_timeout+0xff/0x130
>>>>> [ 4549.947526]  __wait_for_common+0xaf/0x160
>>>>> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
>>>>> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
>>>>> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
>>>>> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>>> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
>>>>> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>>> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>>> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
>>>>> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
>>>>> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
>>>>> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
>>>>> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>>>>> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
>>>>> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
>>>>> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
>>>>> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
>>>>> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
>>>>> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
>>>>> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
>>>>> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
>>>>> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
>>>>> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
>>>>> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
>>>>> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
>>>>> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
>>>>> [ 4549.948340]  vfs_rename+0x818/0x10d0
>>>>> [ 4549.948345]  ? lookup_dcache+0x17/0x60
>>>>> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
>>>>> [ 4549.948350]  do_renameat2+0x57f/0x5f0
>>>>> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
>>>>> [ 4549.948357]  do_syscall_64+0x34/0x80
>>>>> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>>>> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
>>>>> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>>>> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
>>>>> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
>>>>> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
>>>>> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
>>>>> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
>>>>> [ 4549.948373]  </TASK>
>>>>> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
>>>>> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
>>>>> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4549.970205] Call Trace:
>>>>> [ 4549.970206]  <TASK>
>>>>> [ 4549.970209]  __schedule+0x373/0x1580
>>>>> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.970215]  schedule+0x5b/0xe0
>>>>> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4549.970219]  ? finish_wait+0x90/0x90
>>>>> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>>>> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
>>>>> [ 4549.970244]  ? finish_wait+0x90/0x90
>>>>> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
>>>>> [ 4549.970248]  md_handle_request+0x11f/0x1b0
>>>>> [ 4549.970251]  md_submit_bio+0x6e/0xb0
>>>>> [ 4549.970254]  __submit_bio+0x18c/0x220
>>>>> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4549.970267]  process_one_work+0x1d3/0x360
>>>>> [ 4549.970270]  worker_thread+0x4d/0x3b0
>>>>> [ 4549.970272]  ? process_one_work+0x360/0x360
>>>>> [ 4549.970274]  kthread+0x115/0x140
>>>>> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4549.970278]  ret_from_fork+0x1f/0x30
>>>>> [ 4549.970282]  </TASK>
>>>>> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
>>>>> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
>>>>> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4549.992097] Call Trace:
>>>>> [ 4549.992098]  <TASK>
>>>>> [ 4549.992100]  __schedule+0x373/0x1580
>>>>> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>> [ 4549.992109]  schedule+0x5b/0xe0
>>>>> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4549.992114]  ? finish_wait+0x90/0x90
>>>>> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
>>>>> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
>>>>> [ 4549.992135]  ? finish_wait+0x90/0x90
>>>>> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
>>>>> [ 4549.992139]  md_handle_request+0x11f/0x1b0
>>>>> [ 4549.992142]  md_submit_bio+0x6e/0xb0
>>>>> [ 4549.992144]  __submit_bio+0x18c/0x220
>>>>> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4549.992157]  process_one_work+0x1d3/0x360
>>>>> [ 4549.992160]  worker_thread+0x4d/0x3b0
>>>>> [ 4549.992162]  ? process_one_work+0x360/0x360
>>>>> [ 4549.992163]  kthread+0x115/0x140
>>>>> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4549.992168]  ret_from_fork+0x1f/0x30
>>>>> [ 4549.992172]  </TASK>
>>>>> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
>>>>> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
>>>>> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4550.013992] Call Trace:
>>>>> [ 4550.013993]  <TASK>
>>>>> [ 4550.013995]  __schedule+0x373/0x1580
>>>>> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>> [ 4550.014003]  schedule+0x5b/0xe0
>>>>> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4550.014008]  ? finish_wait+0x90/0x90
>>>>> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
>>>>> [ 4550.014022]  ? finish_wait+0x90/0x90
>>>>> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
>>>>> [ 4550.014027]  md_handle_request+0x11f/0x1b0
>>>>> [ 4550.014030]  md_submit_bio+0x6e/0xb0
>>>>> [ 4550.014032]  __submit_bio+0x18c/0x220
>>>>> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4550.014044]  process_one_work+0x1d3/0x360
>>>>> [ 4550.014047]  worker_thread+0x4d/0x3b0
>>>>> [ 4550.014049]  ? process_one_work+0x360/0x360
>>>>> [ 4550.014050]  kthread+0x115/0x140
>>>>> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4550.014054]  ret_from_fork+0x1f/0x30
>>>>> [ 4550.014058]  </TASK>
>>>>> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
>>>>> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
>>>>> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4550.035887] Call Trace:
>>>>> [ 4550.035888]  <TASK>
>>>>> [ 4550.035890]  __schedule+0x373/0x1580
>>>>> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>> [ 4550.035899]  schedule+0x5b/0xe0
>>>>> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4550.035904]  ? finish_wait+0x90/0x90
>>>>> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
>>>>> [ 4550.035919]  ? finish_wait+0x90/0x90
>>>>> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
>>>>> [ 4550.035924]  md_handle_request+0x11f/0x1b0
>>>>> [ 4550.035927]  md_submit_bio+0x6e/0xb0
>>>>> [ 4550.035929]  __submit_bio+0x18c/0x220
>>>>> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4550.035942]  process_one_work+0x1d3/0x360
>>>>> [ 4550.035946]  worker_thread+0x4d/0x3b0
>>>>> [ 4550.035948]  ? process_one_work+0x360/0x360
>>>>> [ 4550.035949]  kthread+0x115/0x140
>>>>> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4550.035953]  ret_from_fork+0x1f/0x30
>>>>> [ 4550.035957]  </TASK>
>>>>> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
>>>>> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
>>>>> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
>>>>> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>> [ 4550.057794] Call Trace:
>>>>> [ 4550.057796]  <TASK>
>>>>> [ 4550.057798]  __schedule+0x373/0x1580
>>>>> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>> [ 4550.057806]  schedule+0x5b/0xe0
>>>>> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
>>>>> [ 4550.057810]  ? finish_wait+0x90/0x90
>>>>> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
>>>>> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
>>>>> [ 4550.057824]  ? finish_wait+0x90/0x90
>>>>> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
>>>>> [ 4550.057828]  md_handle_request+0x11f/0x1b0
>>>>> [ 4550.057831]  md_submit_bio+0x6e/0xb0
>>>>> [ 4550.057834]  __submit_bio+0x18c/0x220
>>>>> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
>>>>> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
>>>>> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>> [ 4550.057846]  process_one_work+0x1d3/0x360
>>>>> [ 4550.057848]  worker_thread+0x4d/0x3b0
>>>>> [ 4550.057850]  ? process_one_work+0x360/0x360
>>>>> [ 4550.057852]  kthread+0x115/0x140
>>>>> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
>>>>> [ 4550.057856]  ret_from_fork+0x1f/0x30
>>>>> [ 4550.057860]  </TASK>
>>>> 
>>>> 
>>>>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>>>>> 
>>>>> I tried updating to 5.15.164, but had to struggle against our config management as some options have shifted and need to be filtered out: NFSD_V3 and NFSD_V2_ACL are now fixed and cause config errors if set - I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
>>>>> 
>>>>>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>> 
>>>>>>> Sure,
>>>>>>> 
>>>>>>> would you prefer me testing on 5.15.x or something else?
>>>>>>> 
>>>>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> On 2024/08/06 22:10, Christian Theune wrote:
>>>>>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array that is encrypted using LUKS.
>>>>>>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 gib in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>>>>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>>>>>>> Kernel:
>>>>>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>>>>> 
>>>>>>> Since you can trigger this easily, I'd suggest you try the latest
>>>>>>> kernel release first.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Kuai
>>>>>>> 
>>>>>>>>> See the kernel config attached.
>>>>>>> 
>>>>>>> 
>>>>>>> Kind regards,
>>>>>>> Christian Theune
>>>>>>> 
>>>>>>> -- 
>>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>>>> 
>>>>>>> 
>>>>> 
>>>> 
>> 
>> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-19 21:05                 ` John Stoffel
@ 2024-08-24 16:56                   ` tihmstar
  2024-08-24 18:12                   ` Dragan Milivojević
  1 sibling, 0 replies; 88+ messages in thread
From: tihmstar @ 2024-08-24 16:56 UTC (permalink / raw)
  To: John Stoffel
  Cc: Christian Theune, Yu Kuai, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

Hi John,

I did try the bitmap patch: I installed "linux-git" with yay, with the bitmap patch placed in /etc/linux-git/patches.
This was a 6.11 kernel (from about two days ago).
With that I restarted the sync, but it failed less than 24 hours later.
This time there were no hung processes like before, and I can't tell what exactly caused the failure.
It could be an issue on my destination NAS, but it might as well be one on the source NAS.
Since I noticed that two of my HDDs didn't look very healthy (one had 423 errors and the other 70), despite being freshly installed (refurbished disks),
I decided to downgrade to a stable kernel and replace those two drives first.
Afterwards I will re-attempt the sync with the newest stable kernel, first without the patch; if that fails, I'll retry with the patch.
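For reference, checking the health of the suspect drives before swapping them can look roughly like this (a sketch, assuming smartmontools is installed; /dev/sdX is a placeholder, not one of my actual device names):

```shell
# Overall SMART health verdict for the drive
smartctl -H /dev/sdX

# Attributes worth watching on refurbished disks
smartctl -A /dev/sdX | grep -Ei 'reallocated|pending|uncorrect'

# Per-drive error log (the error counts mentioned above show up here)
smartctl -l error /dev/sdX
```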

> Interesting.  Why this way?  It would seem you now have to enter N
> passwords on bootup, instead of just one.  

I have a rather unique setup, but the baseline is that I have full-disk encryption on my rootfs (NVMe), which holds a keyfile that is used for all the HDDs in the RAID.
The reason I have LUKS below the RAID is that I actually want to use btrfs RAID (and have used it before), but the raid-stripe-tree feature I'm looking forward to just isn't usable yet.
On my old NAS I did use the "classic" btrfs RAID, but well, now I ended up with data corruption, who would have thought.
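To make the layering concrete, a minimal sketch of the stack (not my exact commands; the device names, member count, and keyfile path are placeholders):

```shell
# One LUKS container per member HDD, unlocked with a keyfile
# that lives on the (separately encrypted) NVMe rootfs
cryptsetup luksFormat /dev/sda --key-file /root/raid.key
cryptsetup open /dev/sda crypt-sda --key-file /root/raid.key
# ... repeat for each HDD ...

# RAID6 assembled on top of the dm-crypt mappings
mdadm --create /dev/md127 --level=6 --raid-devices=4 \
      /dev/mapper/crypt-sda /dev/mapper/crypt-sdb \
      /dev/mapper/crypt-sdc /dev/mapper/crypt-sdd

# btrfs as a plain single-device filesystem on the md array
mkfs.btrfs /dev/md127
```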

I still think that the bitmap is the issue, but I will report back after more tests.
The deadlock might have been triggered by my faulty drive; for whoever is familiar with that codebase, I think it's worth checking for unexpected behavior when a drive fails.
I have also noticed that there is currently heavy work being done on exactly that code.
Does that have any impact on the issue I'm experiencing?

Cheers
tihmstar

> Am 19.08.2024 um 23:05 schrieb John Stoffel <john@stoffel.org>:
> 
>>>>>> "tihmstar" == tihmstar  <tihmstar@gmail.com> writes:
> 
>> Hi,
> 
>> I think I have the same problem (it looks very similar at least).  I
>> updated my kernel yesterday, but before that (a few minor versions
>> earlier) I'm pretty sure I saw a very similar stack trace with
>> "md_bitmap_startwrite".
> 
> This almost smells like an MD RAID5/6 bitmap problem, have you tried
> the patch that was posted in the thread to turn off bitmaps?  
> 
>> The setup seems to be similar, with some slight differences.
> 
>> I'm also running a Linux RAID6 with LUKS; however, I'm running it on SATA HDDs, not NVMe.
> 
> Shouldn't be a difference, except in terms of speed I would think.
> 
>> Also I have the LUKS layer on each HDD individually; the RAID6
>> is built on top of the /dev/mapper devices of those HDDs.
> 
> Interesting.  Why this way?  It would seem you now have to enter N
> passwords on bootup, instead of just one.  
> 
>> I.e. I have LUKS below the RAID, while Christian has it on top of the RAID.
> 
>> Directly on the RAID I have btrfs (as a "single disk"; RAID is handled
>> by linux-raid).
> 
> Good, btrfs RAID6 is known to have problems in a big way. 
> 
>> The "trigger" is similar too: I have one large NAS with 100TB of data, which I'm trying to migrate to my new NAS.
>> "rclone copy --links --local-zero-size-links -P --transfers=16 --sftp-ask-password ....."
> 
>> After running that for ~30 hours (10Gbit network link), IO on the new NAS is completely stuck.
> 
>> I attached a few files with info about my setup that I think might be useful.
>> I'm happy to help debug, but I too have data on the new NAS, so rebuilding the RAID isn't an option for me either.
> 
>> Cheers
>> tihmstar
> 
>> PS: second try sending this (I've never used mailing lists before)
> 
>> [root@coldnas ~]# dmesg 
>> [56065.390298] md/raid:md127: read error corrected (8 sectors at 8952913016 on dm-1)
>> [56068.949917] ata25.00: exception Emask 0x0 SAct 0x79ffffe1 SErr 0x0 action 0x0
>> [56068.957061] ata25.00: irq_stat 0x40000008
>> [56068.961099] ata25.00: failed command: READ FPDMA QUEUED
>> [56068.966335] ata25.00: cmd 60/40:d8:00:9d:a5/05:00:15:02:00/40 tag 27 ncq dma 688128 in
>>                        res 43/40:40:c0:9d:a5/00:05:15:02:00/00 Emask 0x408 (media error) <F>
>> [56068.982442] ata25.00: status: { DRDY SENSE ERR }
>> [56068.987078] ata25.00: error: { UNC }
>> [56069.083162] ata25.00: configured for UDMA/133
>> [56069.083228] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56069.083232] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
>> [56069.083234] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
>> [56069.083236] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d 00 00 00 05 40 00 00
>> [56069.083238] critical medium error, dev sdc, sector 8953109760 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [56069.093669] ata25: EH complete
>> [56071.267547] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56071.274689] ata25.00: irq_stat 0x40000008
>> [56071.278714] ata25.00: failed command: READ FPDMA QUEUED
>> [56071.283952] ata25.00: cmd 60/c0:80:48:aa:a5/02:00:15:02:00/40 tag 16 ncq dma 360448 in
>>                        res 43/40:c0:b0:ab:a5/00:02:15:02:00/00 Emask 0x408 (media error) <F>
>> [56071.300082] ata25.00: status: { DRDY SENSE ERR }
>> [56071.304712] ata25.00: error: { UNC }
>> [56071.390681] ata25.00: configured for UDMA/133
>> [56071.390724] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56071.390726] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
>> [56071.390728] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
>> [56071.390729] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 a5 aa 48 00 00 02 c0 00 00
>> [56071.390731] critical medium error, dev sdc, sector 8953113160 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
>> [56071.401093] ata25: EH complete
>> [56073.426604] ata25.00: exception Emask 0x0 SAct 0x1000 SErr 0x0 action 0x0
>> [56073.433402] ata25.00: irq_stat 0x40000008
>> [56073.437466] ata25.00: failed command: READ FPDMA QUEUED
>> [56073.442727] ata25.00: cmd 60/08:60:a8:b9:a5/00:00:15:02:00/40 tag 12 ncq dma 4096 in
>>                        res 43/40:08:a8:b9:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56073.458694] ata25.00: status: { DRDY SENSE ERR }
>> [56073.463326] ata25.00: error: { UNC }
>> [56073.564919] ata25.00: configured for UDMA/133
>> [56073.564929] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56073.564933] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
>> [56073.564935] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
>> [56073.564937] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 15 a5 b9 a8 00 00 00 08 00 00
>> [56073.564939] critical medium error, dev sdc, sector 8953117096 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56073.575101] ata25: EH complete
>> [56074.052895] md/raid:md127: read error corrected (8 sectors at 8953082280 on dm-1)
>> [56077.233232] ata25.00: exception Emask 0x0 SAct 0x83f7ff02 SErr 0x0 action 0x0
>> [56077.240378] ata25.00: irq_stat 0x40000008
>> [56077.244411] ata25.00: failed command: READ FPDMA QUEUED
>> [56077.249645] ata25.00: cmd 60/40:88:00:d5:a5/05:00:15:02:00/40 tag 17 ncq dma 688128 in
>>                        res 43/40:40:40:d5:a5/00:05:15:02:00/00 Emask 0x408 (media error) <F>
>> [56077.265768] ata25.00: status: { DRDY SENSE ERR }
>> [56077.270401] ata25.00: error: { UNC }
>> [56077.371947] ata25.00: configured for UDMA/133
>> [56077.371989] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56077.371992] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
>> [56077.371994] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
>> [56077.371995] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 00 00 00 05 40 00 00
>> [56077.371997] critical medium error, dev sdc, sector 8953124096 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [56077.382450] ata25: EH complete
>> [56079.490942] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56079.498082] ata25.00: irq_stat 0x40000008
>> [56079.502109] ata25.00: failed command: READ FPDMA QUEUED
>> [56079.507345] ata25.00: cmd 60/c0:80:40:e2:a5/02:00:15:02:00/40 tag 16 ncq dma 360448 in
>>                        res 43/40:c0:40:e3:a5/00:02:15:02:00/00 Emask 0x408 (media error) <F>
>> [56079.523453] ata25.00: status: { DRDY SENSE ERR }
>> [56079.528103] ata25.00: error: { UNC }
>> [56079.637795] ata25.00: configured for UDMA/133
>> [56079.637949] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56079.637952] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
>> [56079.637955] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
>> [56079.637957] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 a5 e2 40 00 00 02 c0 00 00
>> [56079.637959] critical medium error, dev sdc, sector 8953127488 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
>> [56079.648339] ata25: EH complete
>> [56084.636191] ata25.00: exception Emask 0x0 SAct 0x3fde1800 SErr 0x0 action 0x0
>> [56084.643341] ata25.00: irq_stat 0x40000008
>> [56084.647380] ata25.00: failed command: READ FPDMA QUEUED
>> [56084.652610] ata25.00: cmd 60/08:88:c8:ab:a5/00:00:15:02:00/40 tag 17 ncq dma 4096 in
>>                        res 43/40:08:c8:ab:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56084.668561] ata25.00: status: { DRDY SENSE ERR }
>> [56084.673189] ata25.00: error: { UNC }
>> [56084.760994] ata25.00: configured for UDMA/133
>> [56084.761009] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56084.761012] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
>> [56084.761013] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
>> [56084.761015] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 15 a5 ab c8 00 00 00 08 00 00
>> [56084.761016] critical medium error, dev sdc, sector 8953113544 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56084.771196] ata25: EH complete
>> [56087.035282] ata25.00: exception Emask 0x0 SAct 0x1ffc SErr 0x0 action 0x0
>> [56087.042074] ata25.00: irq_stat 0x40000008
>> [56087.046100] ata25.00: failed command: READ FPDMA QUEUED
>> [56087.051341] ata25.00: cmd 60/08:10:d0:ab:a5/00:00:15:02:00/40 tag 2 ncq dma 4096 in
>>                        res 43/40:08:d0:ab:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56087.067187] ata25.00: status: { DRDY SENSE ERR }
>> [56087.071879] ata25.00: error: { UNC }
>> [56087.168480] ata25.00: configured for UDMA/133
>> [56087.168495] sd 24:0:0:0: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=6s
>> [56087.168498] sd 24:0:0:0: [sdc] tag#2 Sense Key : Medium Error [current] 
>> [56087.168501] sd 24:0:0:0: [sdc] tag#2 Add. Sense: Unrecovered read error
>> [56087.168503] sd 24:0:0:0: [sdc] tag#2 CDB: Read(16) 88 00 00 00 00 02 15 a5 ab d0 00 00 00 08 00 00
>> [56087.168504] critical medium error, dev sdc, sector 8953113552 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56087.178680] ata25: EH complete
>> [56087.435439] md/raid:md127: read error corrected (8 sectors at 8953078736 on dm-1)
>> [56087.435437] md/raid:md127: read error corrected (8 sectors at 8953078728 on dm-1)
>> [56090.450888] ata25.00: exception Emask 0x0 SAct 0x701f9ff SErr 0x0 action 0x0
>> [56090.457937] ata25.00: irq_stat 0x40000008
>> [56090.461967] ata25.00: failed command: READ FPDMA QUEUED
>> [56090.467243] ata25.00: cmd 60/08:00:40:e3:a5/00:00:15:02:00/40 tag 0 ncq dma 4096 in
>>                        res 43/40:08:40:e3:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56090.483104] ata25.00: status: { DRDY SENSE ERR }
>> [56090.487736] ata25.00: error: { UNC }
>> [56090.708972] ata25.00: configured for UDMA/133
>> [56090.708989] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56090.708992] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
>> [56090.708994] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
>> [56090.708997] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 a5 e3 40 00 00 00 08 00 00
>> [56090.708998] critical medium error, dev sdc, sector 8953127744 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56090.719219] ata25: EH complete
>> [56094.801196] ata25.00: exception Emask 0x0 SAct 0x3ff0 SErr 0x0 action 0x0
>> [56094.808023] ata25.00: irq_stat 0x40000008
>> [56094.812070] ata25.00: failed command: READ FPDMA QUEUED
>> [56094.817310] ata25.00: cmd 60/08:20:40:d5:a5/00:00:15:02:00/40 tag 4 ncq dma 4096 in
>>                        res 43/40:08:40:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56094.833164] ata25.00: status: { DRDY SENSE ERR }
>> [56094.837814] ata25.00: error: { UNC }
>> [56094.940782] ata25.00: configured for UDMA/133
>> [56094.940794] sd 24:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56094.940797] sd 24:0:0:0: [sdc] tag#4 Sense Key : Medium Error [current] 
>> [56094.940798] sd 24:0:0:0: [sdc] tag#4 Add. Sense: Unrecovered read error
>> [56094.940800] sd 24:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 40 00 00 00 08 00 00
>> [56094.940801] critical medium error, dev sdc, sector 8953124160 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56094.950988] ata25: EH complete
>> [56097.106784] ata25.00: exception Emask 0x0 SAct 0x1ff SErr 0x0 action 0x0
>> [56097.113493] ata25.00: irq_stat 0x40000008
>> [56097.117535] ata25.00: failed command: READ FPDMA QUEUED
>> [56097.122766] ata25.00: cmd 60/08:00:48:d5:a5/00:00:15:02:00/40 tag 0 ncq dma 4096 in
>>                        res 43/40:08:48:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56097.138641] ata25.00: status: { DRDY SENSE ERR }
>> [56097.143266] ata25.00: error: { UNC }
>> [56097.240013] ata25.00: configured for UDMA/133
>> [56097.240023] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56097.240025] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
>> [56097.240027] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
>> [56097.240029] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 48 00 00 00 08 00 00
>> [56097.240030] critical medium error, dev sdc, sector 8953124168 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56097.250218] ata25: EH complete
>> [56097.410333] md/raid:md127: read error corrected (8 sectors at 8953089344 on dm-1)
>> [56097.410336] md/raid:md127: read error corrected (8 sectors at 8953089352 on dm-1)
>> [56099.619970] ata25.00: exception Emask 0x0 SAct 0xfc400 SErr 0x0 action 0x0
>> [56099.626852] ata25.00: irq_stat 0x40000008
>> [56099.630893] ata25.00: failed command: READ FPDMA QUEUED
>> [56099.636124] ata25.00: cmd 60/08:70:60:d5:a5/00:00:15:02:00/40 tag 14 ncq dma 4096 in
>>                        res 43/40:08:60:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56099.652076] ata25.00: status: { DRDY SENSE ERR }
>> [56099.656714] ata25.00: error: { UNC }
>> [56100.347242] ata25.00: configured for UDMA/133
>> [56100.347261] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
>> [56100.347265] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
>> [56100.347267] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Unrecovered read error
>> [56100.347269] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 60 00 00 00 08 00 00
>> [56100.347271] critical medium error, dev sdc, sector 8953124192 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56100.357447] ata25: EH complete
>> [56102.646580] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56102.653723] ata25.00: irq_stat 0x40000008
>> [56102.657799] ata25.00: failed command: READ FPDMA QUEUED
>> [56102.663032] ata25.00: cmd 60/08:08:68:d5:a5/00:00:15:02:00/40 tag 1 ncq dma 4096 in
>>                        res 43/40:08:68:d5:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56102.678903] ata25.00: status: { DRDY SENSE ERR }
>> [56102.683529] ata25.00: error: { UNC }
>> [56102.771380] ata25.00: configured for UDMA/133
>> [56102.771406] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
>> [56102.771410] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
>> [56102.771412] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
>> [56102.771414] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 a5 d5 68 00 00 00 08 00 00
>> [56102.771416] critical medium error, dev sdc, sector 8953124200 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56102.781675] ata25: EH complete
>> [56103.029621] md/raid:md127: read error corrected (8 sectors at 8953092928 on dm-1)
>> [56103.092184] md/raid:md127: read error corrected (8 sectors at 8953089376 on dm-1)
>> [56103.092186] md/raid:md127: read error corrected (8 sectors at 8953089384 on dm-1)
>> [56105.587003] ata25.00: exception Emask 0x0 SAct 0xbf048f84 SErr 0x0 action 0x0
>> [56105.594143] ata25.00: irq_stat 0x40000008
>> [56105.598252] ata25.00: failed command: READ FPDMA QUEUED
>> [56105.603519] ata25.00: cmd 60/08:38:c0:9d:a5/00:00:15:02:00/40 tag 7 ncq dma 4096 in
>>                        res 43/40:08:c0:9d:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56105.619401] ata25.00: status: { DRDY SENSE ERR }
>> [56105.624038] ata25.00: error: { UNC }
>> [56105.720389] ata25.00: configured for UDMA/133
>> [56105.720406] sd 24:0:0:0: [sdc] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56105.720408] sd 24:0:0:0: [sdc] tag#7 Sense Key : Medium Error [current] 
>> [56105.720410] sd 24:0:0:0: [sdc] tag#7 Add. Sense: Unrecovered read error
>> [56105.720411] sd 24:0:0:0: [sdc] tag#7 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d c0 00 00 00 08 00 00
>> [56105.720412] critical medium error, dev sdc, sector 8953109952 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56105.730613] ata25: EH complete
>> [56107.886332] ata25.00: exception Emask 0x0 SAct 0x801f0fe SErr 0x0 action 0x0
>> [56107.893397] ata25.00: irq_stat 0x40000008
>> [56107.897450] ata25.00: failed command: READ FPDMA QUEUED
>> [56107.902682] ata25.00: cmd 60/08:60:d0:9d:a5/00:00:15:02:00/40 tag 12 ncq dma 4096 in
>>                        res 43/40:08:d0:9d:a5/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56107.918625] ata25.00: status: { DRDY SENSE ERR }
>> [56107.923257] ata25.00: error: { UNC }
>> [56108.027884] ata25.00: configured for UDMA/133
>> [56108.027924] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56108.027928] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
>> [56108.027930] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
>> [56108.027932] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 15 a5 9d d0 00 00 00 08 00 00
>> [56108.027933] critical medium error, dev sdc, sector 8953109968 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56108.038104] ata25: EH complete
>> [56109.020217] md/raid:md127: read error corrected (8 sectors at 8953075136 on dm-1)
>> [56109.070237] md/raid:md127: read error corrected (8 sectors at 8953075152 on dm-1)
>> [56128.637868] ata25.00: exception Emask 0x0 SAct 0x1f08400 SErr 0x0 action 0x0
>> [56128.644996] ata25.00: irq_stat 0x40000008
>> [56128.649038] ata25.00: failed command: READ FPDMA QUEUED
>> [56128.654277] ata25.00: cmd 60/40:50:08:b5:bf/05:00:15:02:00/40 tag 10 ncq dma 688128 in
>>                        res 43/40:40:50:b9:bf/00:05:15:02:00/00 Emask 0x408 (media error) <F>
>> [56128.670394] ata25.00: status: { DRDY SENSE ERR }
>> [56128.675024] ata25.00: error: { UNC }
>> [56128.778932] ata25.00: configured for UDMA/133
>> [56128.778969] sd 24:0:0:0: [sdc] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56128.778973] sd 24:0:0:0: [sdc] tag#10 Sense Key : Medium Error [current] 
>> [56128.778975] sd 24:0:0:0: [sdc] tag#10 Add. Sense: Unrecovered read error
>> [56128.778978] sd 24:0:0:0: [sdc] tag#10 CDB: Read(16) 88 00 00 00 00 02 15 bf b5 08 00 00 05 40 00 00
>> [56128.778980] critical medium error, dev sdc, sector 8954819848 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [56128.789429] ata25: EH complete
>> [56140.808042] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56140.815183] ata25.00: irq_stat 0x40000008
>> [56140.819207] ata25.00: failed command: READ FPDMA QUEUED
>> [56140.824447] ata25.00: cmd 60/c0:58:50:fa:c1/02:00:15:02:00/40 tag 11 ncq dma 360448 in
>>                        res 43/40:c0:88:fa:c1/00:02:15:02:00/00 Emask 0x408 (media error) <F>
>> [56140.840704] ata25.00: status: { DRDY SENSE ERR }
>> [56140.845334] ata25.00: error: { UNC }
>> [56140.933033] ata25.00: configured for UDMA/133
>> [56140.933158] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56140.933161] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
>> [56140.933163] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Unrecovered read error
>> [56140.933166] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa 50 00 00 02 c0 00 00
>> [56140.933168] critical medium error, dev sdc, sector 8954968656 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
>> [56140.943561] ata25: EH complete
>> [56151.197018] ata25.00: exception Emask 0x0 SAct 0x90a00708 SErr 0x0 action 0x0
>> [56151.204161] ata25.00: irq_stat 0x40000008
>> [56151.208245] ata25.00: failed command: READ FPDMA QUEUED
>> [56151.213520] ata25.00: cmd 60/08:b8:50:b9:bf/00:00:15:02:00/40 tag 23 ncq dma 4096 in
>>                        res 43/40:08:50:b9:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56151.229462] ata25.00: status: { DRDY SENSE ERR }
>> [56151.234143] ata25.00: error: { UNC }
>> [56151.346056] ata25.00: configured for UDMA/133
>> [56151.346077] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56151.346081] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
>> [56151.346083] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
>> [56151.346085] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 15 bf b9 50 00 00 00 08 00 00
>> [56151.346087] critical medium error, dev sdc, sector 8954820944 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56151.356272] ata25: EH complete
>> [56151.993105] md/raid:md127: read error corrected (8 sectors at 8954786128 on dm-1)
>> [56160.208102] ata25.00: exception Emask 0x0 SAct 0x6000 SErr 0x0 action 0x0
>> [56160.214893] ata25.00: irq_stat 0x40000008
>> [56160.218993] ata25.00: failed command: READ FPDMA QUEUED
>> [56160.224233] ata25.00: cmd 60/30:68:00:dd:bf/09:00:15:02:00/40 tag 13 ncq dma 1204224 in
>>                        res 43/40:30:c8:e2:bf/00:09:15:02:00/00 Emask 0x408 (media error) <F>
>> [56160.240498] ata25.00: status: { DRDY SENSE ERR }
>> [56160.245133] ata25.00: error: { UNC }
>> [56160.342920] ata25.00: configured for UDMA/133
>> [56160.342941] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
>> [56160.342944] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
>> [56160.342946] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
>> [56160.342949] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 bf dd 00 00 00 09 30 00 00
>> [56160.342950] critical medium error, dev sdc, sector 8954830080 op 0x0:(READ) flags 0x84700 phys_seg 138 prio class 0
>> [56160.353550] ata25: EH complete
>> [56162.898662] ata25.00: exception Emask 0x0 SAct 0xff033fff SErr 0x0 action 0x0
>> [56162.905798] ata25.00: irq_stat 0x40000008
>> [56162.909825] ata25.00: failed command: READ FPDMA QUEUED
>> [56162.915066] ata25.00: cmd 60/08:c0:88:fa:c1/00:00:15:02:00/40 tag 24 ncq dma 4096 in
>>                        res 43/40:08:88:fa:c1/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56162.931020] ata25.00: status: { DRDY SENSE ERR }
>> [56162.935724] ata25.00: error: { UNC }
>> [56163.025343] ata25.00: configured for UDMA/133
>> [56163.025377] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56163.025380] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
>> [56163.025381] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
>> [56163.025383] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa 88 00 00 00 08 00 00
>> [56163.025384] critical medium error, dev sdc, sector 8954968712 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56163.035579] ata25: EH complete
>> [56165.407934] ata25.00: exception Emask 0x0 SAct 0x3f00301e SErr 0x0 action 0x0
>> [56165.415071] ata25.00: irq_stat 0x40000008
>> [56165.419099] ata25.00: failed command: READ FPDMA QUEUED
>> [56165.424332] ata25.00: cmd 60/08:08:a0:fa:c1/00:00:15:02:00/40 tag 1 ncq dma 4096 in
>>                        res 43/40:08:a0:fa:c1/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56165.440240] ata25.00: status: { DRDY SENSE ERR }
>> [56165.444882] ata25.00: error: { UNC }
>> [56165.541134] ata25.00: configured for UDMA/133
>> [56165.541174] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56165.541178] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
>> [56165.541180] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
>> [56165.541182] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 c1 fa a0 00 00 00 08 00 00
>> [56165.541183] critical medium error, dev sdc, sector 8954968736 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56165.551397] ata25: EH complete
>> [56165.908338] md/raid:md127: read error corrected (8 sectors at 8954933896 on dm-1)
>> [56165.908399] md/raid:md127: read error corrected (8 sectors at 8954933920 on dm-1)
>> [56168.776163] ata25.00: exception Emask 0x0 SAct 0x3e00 SErr 0x0 action 0x0
>> [56168.782972] ata25.00: irq_stat 0x40000008
>> [56168.787088] ata25.00: failed command: READ FPDMA QUEUED
>> [56168.792331] ata25.00: cmd 60/68:48:00:15:c2/05:00:15:02:00/40 tag 9 ncq dma 708608 in
>>                        res 43/40:68:38:16:c2/00:05:15:02:00/00 Emask 0x408 (media error) <F>
>> [56168.808360] ata25.00: status: { DRDY SENSE ERR }
>> [56168.812995] ata25.00: error: { UNC }
>> [56168.914982] ata25.00: configured for UDMA/133
>> [56168.915022] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56168.915026] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
>> [56168.915028] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
>> [56168.915030] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 15 c2 15 00 00 00 05 68 00 00
>> [56168.915032] critical medium error, dev sdc, sector 8954975488 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [56168.925513] ata25: EH complete
>> [56171.028271] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56171.035404] ata25.00: irq_stat 0x40000008
>> [56171.039425] ata25.00: failed command: READ FPDMA QUEUED
>> [56171.044663] ata25.00: cmd 60/88:b0:68:22:c2/05:00:15:02:00/40 tag 22 ncq dma 724992 in
>>                        res 43/40:88:40:24:c2/00:05:15:02:00/00 Emask 0x408 (media error) <F>
>> [56171.060767] ata25.00: status: { DRDY SENSE ERR }
>> [56171.065396] ata25.00: error: { UNC }
>> [56171.197504] ata25.00: configured for UDMA/133
>> [56171.197561] sd 24:0:0:0: [sdc] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56171.197563] sd 24:0:0:0: [sdc] tag#22 Sense Key : Medium Error [current] 
>> [56171.197565] sd 24:0:0:0: [sdc] tag#22 Add. Sense: Unrecovered read error
>> [56171.197567] sd 24:0:0:0: [sdc] tag#22 CDB: Read(16) 88 00 00 00 00 02 15 c2 22 68 00 00 05 88 00 00
>> [56171.197568] critical medium error, dev sdc, sector 8954978920 op 0x0:(READ) flags 0x84700 phys_seg 89 prio class 0
>> [56171.207924] ata25: EH complete
>> [56173.646130] ata25.00: exception Emask 0x0 SAct 0x1f830fc SErr 0x0 action 0x0
>> [56173.653181] ata25.00: irq_stat 0x40000008
>> [56173.657206] ata25.00: failed command: READ FPDMA QUEUED
>> [56173.662452] ata25.00: cmd 60/08:98:c8:e2:bf/00:00:15:02:00/40 tag 19 ncq dma 4096 in
>>                        res 43/40:08:c8:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56173.678392] ata25.00: status: { DRDY SENSE ERR }
>> [56173.683024] ata25.00: error: { UNC }
>> [56173.788276] ata25.00: configured for UDMA/133
>> [56173.788298] sd 24:0:0:0: [sdc] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56173.788301] sd 24:0:0:0: [sdc] tag#19 Sense Key : Medium Error [current] 
>> [56173.788302] sd 24:0:0:0: [sdc] tag#19 Add. Sense: Unrecovered read error
>> [56173.788304] sd 24:0:0:0: [sdc] tag#19 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 c8 00 00 00 08 00 00
>> [56173.788305] critical medium error, dev sdc, sector 8954831560 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56173.798484] ata25: EH complete
>> [56176.164900] ata25.00: exception Emask 0x0 SAct 0x7f003c40 SErr 0x0 action 0x0
>> [56176.172042] ata25.00: irq_stat 0x40000008
>> [56176.176075] ata25.00: failed command: READ FPDMA QUEUED
>> [56176.181310] ata25.00: cmd 60/08:c0:d8:e2:bf/00:00:15:02:00/40 tag 24 ncq dma 4096 in
>>                        res 43/40:08:d8:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56176.197248] ata25.00: status: { DRDY SENSE ERR }
>> [56176.201885] ata25.00: error: { UNC }
>> [56176.304061] ata25.00: configured for UDMA/133
>> [56176.304083] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56176.304086] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
>> [56176.304088] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
>> [56176.304091] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 d8 00 00 00 08 00 00
>> [56176.304092] critical medium error, dev sdc, sector 8954831576 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56176.314265] ata25: EH complete
>> [56178.758401] ata25.00: exception Emask 0x0 SAct 0xf0000 SErr 0x0 action 0x0
>> [56178.765286] ata25.00: irq_stat 0x40000008
>> [56178.769308] ata25.00: failed command: READ FPDMA QUEUED
>> [56178.774543] ata25.00: cmd 60/08:80:f0:e2:bf/00:00:15:02:00/40 tag 16 ncq dma 4096 in
>>                        res 43/40:08:f0:e2:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56178.790490] ata25.00: status: { DRDY SENSE ERR }
>> [56178.795122] ata25.00: error: { UNC }
>> [56178.903147] ata25.00: configured for UDMA/133
>> [56178.903159] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
>> [56178.903162] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
>> [56178.903164] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
>> [56178.903166] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 bf e2 f0 00 00 00 08 00 00
>> [56178.903168] critical medium error, dev sdc, sector 8954831600 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56178.913335] ata25: EH complete
>> [56179.970979] md/raid:md127: read error corrected (8 sectors at 8954796744 on dm-1)
>> [56179.995987] md/raid:md127: read error corrected (8 sectors at 8954796760 on dm-1)
>> [56180.127216] md/raid:md127: read error corrected (8 sectors at 8954796784 on dm-1)
>> [56182.417776] ata25.00: exception Emask 0x0 SAct 0x82a33a10 SErr 0x0 action 0x0
>> [56182.424916] ata25.00: irq_stat 0x40000008
>> [56182.428940] ata25.00: failed command: READ FPDMA QUEUED
>> [56182.434181] ata25.00: cmd 60/00:68:00:f0:bf/05:00:15:02:00/40 tag 13 ncq dma 655360 in
>>                        res 43/40:00:a8:f0:bf/00:05:15:02:00/00 Emask 0x408 (media error) <F>
>> [56182.450294] ata25.00: status: { DRDY SENSE ERR }
>> [56182.454927] ata25.00: error: { UNC }
>> [56182.860118] ata25.00: configured for UDMA/133
>> [56182.860160] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56182.860162] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
>> [56182.860164] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
>> [56182.860166] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 00 00 00 05 00 00 00
>> [56182.860167] critical medium error, dev sdc, sector 8954834944 op 0x0:(READ) flags 0x80700 phys_seg 160 prio class 0
>> [56182.870623] ata25: EH complete
>> [56187.499909] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56187.507047] ata25.00: irq_stat 0x40000008
>> [56187.511096] ata25.00: failed command: READ FPDMA QUEUED
>> [56187.516328] ata25.00: cmd 60/c0:38:40:0a:c0/02:00:15:02:00/40 tag 7 ncq dma 360448 in
>>                        res 43/40:c0:90:0c:c0/00:02:15:02:00/00 Emask 0x408 (media error) <F>
>> [56187.532352] ata25.00: status: { DRDY SENSE ERR }
>> [56187.536977] ata25.00: error: { UNC }
>> [56187.683438] ata25.00: configured for UDMA/133
>> [56187.683565] sd 24:0:0:0: [sdc] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
>> [56187.683568] sd 24:0:0:0: [sdc] tag#7 Sense Key : Medium Error [current] 
>> [56187.683571] sd 24:0:0:0: [sdc] tag#7 Add. Sense: Unrecovered read error
>> [56187.683573] sd 24:0:0:0: [sdc] tag#7 CDB: Read(16) 88 00 00 00 00 02 15 c0 0a 40 00 00 02 c0 00 00
>> [56187.683574] critical medium error, dev sdc, sector 8954841664 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
>> [56187.693985] ata25: EH complete
>> [56190.600849] ata25.00: exception Emask 0x0 SAct 0x3c00000 SErr 0x0 action 0x0
>> [56190.607902] ata25.00: irq_stat 0x40000008
>> [56190.611931] ata25.00: failed command: READ FPDMA QUEUED
>> [56190.617167] ata25.00: cmd 60/08:b0:40:24:c2/00:00:15:02:00/40 tag 22 ncq dma 4096 in
>>                        res 43/40:08:40:24:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56190.633103] ata25.00: status: { DRDY SENSE ERR }
>> [56190.637732] ata25.00: error: { UNC }
>> [56190.749035] ata25.00: configured for UDMA/133
>> [56190.749045] sd 24:0:0:0: [sdc] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56190.749047] sd 24:0:0:0: [sdc] tag#22 Sense Key : Medium Error [current] 
>> [56190.749049] sd 24:0:0:0: [sdc] tag#22 Add. Sense: Unrecovered read error
>> [56190.749051] sd 24:0:0:0: [sdc] tag#22 CDB: Read(16) 88 00 00 00 00 02 15 c2 24 40 00 00 00 08 00 00
>> [56190.749052] critical medium error, dev sdc, sector 8954979392 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56190.759221] ata25: EH complete
>> [56192.116233] md/raid:md127: read error corrected (8 sectors at 8954944576 on dm-1)
>> [56194.231404] ata25.00: exception Emask 0x0 SAct 0xffe00083 SErr 0x0 action 0x0
>> [56194.238545] ata25.00: irq_stat 0x40000008
>> [56194.242576] ata25.00: failed command: READ FPDMA QUEUED
>> [56194.247816] ata25.00: cmd 60/08:a8:90:0c:c0/00:00:15:02:00/40 tag 21 ncq dma 4096 in
>>                        res 43/40:08:90:0c:c0/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56194.263754] ata25.00: status: { DRDY SENSE ERR }
>> [56194.268384] ata25.00: error: { UNC }
>> [56194.372726] ata25.00: configured for UDMA/133
>> [56194.372748] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56194.372752] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
>> [56194.372754] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Unrecovered read error
>> [56194.372756] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 15 c0 0c 90 00 00 00 08 00 00
>> [56194.372758] critical medium error, dev sdc, sector 8954842256 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56194.382955] ata25: EH complete
>> [56196.397118] ata25.00: exception Emask 0x0 SAct 0xffcf3fff SErr 0x0 action 0x0
>> [56196.404260] ata25.00: irq_stat 0x40000008
>> [56196.408299] ata25.00: failed command: READ FPDMA QUEUED
>> [56196.413528] ata25.00: cmd 60/08:80:a8:f0:bf/00:00:15:02:00/40 tag 16 ncq dma 4096 in
>>                        res 43/40:08:a8:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56196.429462] ata25.00: status: { DRDY SENSE ERR }
>> [56196.434090] ata25.00: error: { UNC }
>> [56196.530294] ata25.00: configured for UDMA/133
>> [56196.530341] sd 24:0:0:0: [sdc] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56196.530345] sd 24:0:0:0: [sdc] tag#16 Sense Key : Medium Error [current] 
>> [56196.530347] sd 24:0:0:0: [sdc] tag#16 Add. Sense: Unrecovered read error
>> [56196.530349] sd 24:0:0:0: [sdc] tag#16 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 a8 00 00 00 08 00 00
>> [56196.530351] critical medium error, dev sdc, sector 8954835112 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56196.540554] ata25: EH complete
>> [56198.681800] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56198.688940] ata25.00: irq_stat 0x40000008
>> [56198.692969] ata25.00: failed command: READ FPDMA QUEUED
>> [56198.698203] ata25.00: cmd 60/08:48:b0:f0:bf/00:00:15:02:00/40 tag 9 ncq dma 4096 in
>>                        res 43/40:08:b0:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56198.714053] ata25.00: status: { DRDY SENSE ERR }
>> [56198.718683] ata25.00: error: { UNC }
>> [56198.804495] ata25.00: configured for UDMA/133
>> [56198.804544] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56198.804547] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
>> [56198.804548] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
>> [56198.804550] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 b0 00 00 00 08 00 00
>> [56198.804551] critical medium error, dev sdc, sector 8954835120 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56198.814780] ata25: EH complete
>> [56201.562060] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56201.569195] ata25.00: irq_stat 0x40000008
>> [56201.573222] ata25.00: failed command: READ FPDMA QUEUED
>> [56201.578453] ata25.00: cmd 60/08:98:b8:f0:bf/00:00:15:02:00/40 tag 19 ncq dma 4096 in
>>                        res 43/40:08:b8:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56201.594397] ata25.00: status: { DRDY SENSE ERR }
>> [56201.599028] ata25.00: error: { UNC }
>> [56201.686862] ata25.00: configured for UDMA/133
>> [56201.686971] sd 24:0:0:0: [sdc] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=7s
>> [56201.686975] sd 24:0:0:0: [sdc] tag#19 Sense Key : Medium Error [current] 
>> [56201.686977] sd 24:0:0:0: [sdc] tag#19 Add. Sense: Unrecovered read error
>> [56201.686980] sd 24:0:0:0: [sdc] tag#19 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 b8 00 00 00 08 00 00
>> [56201.686981] critical medium error, dev sdc, sector 8954835128 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56201.697185] ata25: EH complete
>> [56204.153189] ata25.00: exception Emask 0x0 SAct 0x7eb9f001 SErr 0x0 action 0x0
>> [56204.160325] ata25.00: irq_stat 0x40000008
>> [56204.164349] ata25.00: failed command: READ FPDMA QUEUED
>> [56204.169575] ata25.00: cmd 60/08:70:d0:f0:bf/00:00:15:02:00/40 tag 14 ncq dma 4096 in
>>                        res 43/40:08:d0:f0:bf/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56204.185512] ata25.00: status: { DRDY SENSE ERR }
>> [56204.190140] ata25.00: error: { UNC }
>> [56204.285936] ata25.00: configured for UDMA/133
>> [56204.285962] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
>> [56204.285966] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
>> [56204.285968] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Unrecovered read error
>> [56204.285969] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 15 bf f0 d0 00 00 00 08 00 00
>> [56204.285970] critical medium error, dev sdc, sector 8954835152 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56204.296157] ata25: EH complete
>> [56204.675513] md/raid:md127: read error corrected (8 sectors at 8954800296 on dm-1)
>> [56204.675539] md/raid:md127: read error corrected (8 sectors at 8954807440 on dm-1)
>> [56205.744062] md/raid:md127: read error corrected (8 sectors at 8954800304 on dm-1)
>> [56205.744063] md/raid:md127: read error corrected (8 sectors at 8954800312 on dm-1)
>> [56205.762220] md/raid:md127: read error corrected (8 sectors at 8954800336 on dm-1)
>> [56208.131813] ata25.00: exception Emask 0x0 SAct 0x3ff87c SErr 0x0 action 0x0
>> [56208.138777] ata25.00: irq_stat 0x40000008
>> [56208.142810] ata25.00: failed command: READ FPDMA QUEUED
>> [56208.148045] ata25.00: cmd 60/08:58:38:16:c2/00:00:15:02:00/40 tag 11 ncq dma 4096 in
>>                        res 43/40:08:38:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56208.163988] ata25.00: status: { DRDY SENSE ERR }
>> [56208.168617] ata25.00: error: { UNC }
>> [56208.251256] ata25.00: configured for UDMA/133
>> [56208.251278] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56208.251281] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
>> [56208.251283] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Unrecovered read error
>> [56208.251286] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 38 00 00 00 08 00 00
>> [56208.251287] critical medium error, dev sdc, sector 8954975800 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56208.261478] ata25: EH complete
>> [56210.458900] ata25.00: exception Emask 0x0 SAct 0x57c0154b SErr 0x0 action 0x0
>> [56210.466042] ata25.00: irq_stat 0x40000008
>> [56210.470074] ata25.00: failed command: READ FPDMA QUEUED
>> [56210.475307] ata25.00: cmd 60/08:e0:48:16:c2/00:00:15:02:00/40 tag 28 ncq dma 4096 in
>>                        res 43/40:08:48:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56210.491246] ata25.00: status: { DRDY SENSE ERR }
>> [56210.495877] ata25.00: error: { UNC }
>> [56210.592096] ata25.00: configured for UDMA/133
>> [56210.592116] sd 24:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56210.592118] sd 24:0:0:0: [sdc] tag#28 Sense Key : Medium Error [current] 
>> [56210.592120] sd 24:0:0:0: [sdc] tag#28 Add. Sense: Unrecovered read error
>> [56210.592122] sd 24:0:0:0: [sdc] tag#28 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 48 00 00 00 08 00 00
>> [56210.592123] critical medium error, dev sdc, sector 8954975816 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56210.602295] ata25: EH complete
>> [56212.738574] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56212.745711] ata25.00: irq_stat 0x40000008
>> [56212.749733] ata25.00: failed command: READ FPDMA QUEUED
>> [56212.754964] ata25.00: cmd 60/08:68:60:16:c2/00:00:15:02:00/40 tag 13 ncq dma 4096 in
>>                        res 43/40:08:60:16:c2/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56212.770902] ata25.00: status: { DRDY SENSE ERR }
>> [56212.775529] ata25.00: error: { UNC }
>> [56212.857966] ata25.00: configured for UDMA/133
>> [56212.858023] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=6s
>> [56212.858026] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
>> [56212.858028] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
>> [56212.858031] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 c2 16 60 00 00 00 08 00 00
>> [56212.858032] critical medium error, dev sdc, sector 8954975840 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56212.868220] ata25: EH complete
>> [56222.496422] ata25.00: exception Emask 0x0 SAct 0x60000000 SErr 0x0 action 0x0
>> [56222.503555] ata25.00: irq_stat 0x40000008
>> [56222.507581] ata25.00: failed command: READ FPDMA QUEUED
>> [56222.512806] ata25.00: cmd 60/00:e8:00:25:c0/08:00:15:02:00/40 tag 29 ncq dma 1048576 in
>>                        res 43/40:00:50:28:c0/00:08:15:02:00/00 Emask 0x408 (media error) <F>
>> [56222.529006] ata25.00: status: { DRDY SENSE ERR }
>> [56222.533632] ata25.00: error: { UNC }
>> [56222.671224] ata25.00: configured for UDMA/133
>> [56222.671234] sd 24:0:0:0: [sdc] tag#29 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
>> [56222.671237] sd 24:0:0:0: [sdc] tag#29 Sense Key : Aborted Command [current] 
>> [56222.671239] sd 24:0:0:0: [sdc] tag#29 <<vendor>>ASC=0x80 ASCQ=0x2 
>> [56222.671241] sd 24:0:0:0: [sdc] tag#29 CDB: Read(16) 88 00 00 00 00 02 15 c0 25 00 00 00 08 00 00 00
>> [56222.671242] I/O error, dev sdc, sector 8954848512 op 0x0:(READ) flags 0x84700 phys_seg 30 prio class 0
>> [56222.680538] ata25: EH complete
>> [56222.688487] md/raid:md127: read error corrected (8 sectors at 8954940984 on dm-1)
>> [56222.688713] md/raid:md127: read error corrected (8 sectors at 8954941000 on dm-1)
>> [56222.694091] md/raid:md127: read error corrected (8 sectors at 8954941024 on dm-1)
>> [56268.461816] ata25.00: exception Emask 0x0 SAct 0x3c015f SErr 0x0 action 0x0
>> [56268.468783] ata25.00: irq_stat 0x40000008
>> [56268.472807] ata25.00: failed command: READ FPDMA QUEUED
>> [56268.478039] ata25.00: cmd 60/68:00:98:08:f9/07:00:15:02:00/40 tag 0 ncq dma 970752 in
>>                        res 43/40:68:f0:0c:f9/00:07:15:02:00/00 Emask 0x408 (media error) <F>
>> [56268.494065] ata25.00: status: { DRDY SENSE ERR }
>> [56268.498696] ata25.00: error: { UNC }
>> [56268.613464] ata25.00: configured for UDMA/133
>> [56268.613500] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
>> [56268.613503] sd 24:0:0:0: [sdc] tag#0 Sense Key : Aborted Command [current] 
>> [56268.613505] sd 24:0:0:0: [sdc] tag#0 <<vendor>>ASC=0x80 ASCQ=0x2 
>> [56268.613507] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 15 f9 08 98 00 00 07 68 00 00
>> [56268.613508] I/O error, dev sdc, sector 8958576792 op 0x0:(READ) flags 0x80700 phys_seg 112 prio class 0
>> [56268.622926] ata25: EH complete
>> [56274.436128] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56274.443268] ata25.00: irq_stat 0x40000008
>> [56274.447298] ata25.00: failed command: READ FPDMA QUEUED
>> [56274.452542] ata25.00: cmd 60/c0:f8:58:82:fb/02:00:15:02:00/40 tag 31 ncq dma 360448 in
>>                        res 43/40:c0:c0:84:fb/00:02:15:02:00/00 Emask 0x408 (media error) <F>
>> [56274.468653] ata25.00: status: { DRDY SENSE ERR }
>> [56274.473431] ata25.00: error: { UNC }
>> [56274.569728] ata25.00: configured for UDMA/133
>> [56274.569846] sd 24:0:0:0: [sdc] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
>> [56274.569849] sd 24:0:0:0: [sdc] tag#31 Sense Key : Medium Error [current] 
>> [56274.569851] sd 24:0:0:0: [sdc] tag#31 Add. Sense: Unrecovered read error
>> [56274.569853] sd 24:0:0:0: [sdc] tag#31 CDB: Read(16) 88 00 00 00 00 02 15 fb 82 58 00 00 02 c0 00 00
>> [56274.569854] critical medium error, dev sdc, sector 8958739032 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
>> [56274.580225] ata25: EH complete
>> [56277.816375] ata25.00: exception Emask 0x0 SAct 0x7c00003f SErr 0x0 action 0x0
>> [56277.823514] ata25.00: irq_stat 0x40000008
>> [56277.827538] ata25.00: failed command: READ FPDMA QUEUED
>> [56277.832776] ata25.00: cmd 60/08:d0:c0:84:fb/00:00:15:02:00/40 tag 26 ncq dma 4096 in
>>                        res 43/40:08:c0:84:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56277.848709] ata25.00: status: { DRDY SENSE ERR }
>> [56277.853339] ata25.00: error: { UNC }
>> [56277.943562] ata25.00: configured for UDMA/133
>> [56277.943583] sd 24:0:0:0: [sdc] tag#26 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56277.943586] sd 24:0:0:0: [sdc] tag#26 Sense Key : Medium Error [current] 
>> [56277.943588] sd 24:0:0:0: [sdc] tag#26 Add. Sense: Unrecovered read error
>> [56277.943590] sd 24:0:0:0: [sdc] tag#26 CDB: Read(16) 88 00 00 00 00 02 15 fb 84 c0 00 00 00 08 00 00
>> [56277.943591] critical medium error, dev sdc, sector 8958739648 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56277.953781] ata25: EH complete
>> [56278.035244] md/raid:md127: read error corrected (8 sectors at 8958704832 on dm-1)
>> [56282.414840] ata25.00: exception Emask 0x0 SAct 0x1f00 SErr 0x0 action 0x0
>> [56282.421632] ata25.00: irq_stat 0x40000008
>> [56282.425664] ata25.00: failed command: READ FPDMA QUEUED
>> [56282.430892] ata25.00: cmd 60/40:40:00:9d:fb/05:00:15:02:00/40 tag 8 ncq dma 688128 in
>>                        res 43/40:40:70:a0:fb/00:05:15:02:00/00 Emask 0x408 (media error) <F>
>> [56282.446907] ata25.00: status: { DRDY SENSE ERR }
>> [56282.451530] ata25.00: error: { UNC }
>> [56282.550283] ata25.00: configured for UDMA/133
>> [56282.550306] sd 24:0:0:0: [sdc] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56282.550309] sd 24:0:0:0: [sdc] tag#8 Sense Key : Medium Error [current] 
>> [56282.550311] sd 24:0:0:0: [sdc] tag#8 Add. Sense: Unrecovered read error
>> [56282.550313] sd 24:0:0:0: [sdc] tag#8 CDB: Read(16) 88 00 00 00 00 02 15 fb 9d 00 00 00 05 40 00 00
>> [56282.550314] critical medium error, dev sdc, sector 8958745856 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [56282.560764] ata25: EH complete
>> [56284.651765] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56284.658903] ata25.00: irq_stat 0x40000008
>> [56284.662929] ata25.00: failed command: READ FPDMA QUEUED
>> [56284.668166] ata25.00: cmd 60/f8:68:08:ad:fb/02:00:15:02:00/40 tag 13 ncq dma 389120 in
>>                        res 43/40:f8:50:ae:fb/00:02:15:02:00/00 Emask 0x408 (media error) <F>
>> [56284.684304] ata25.00: status: { DRDY SENSE ERR }
>> [56284.688940] ata25.00: error: { UNC }
>> [56284.782803] ata25.00: configured for UDMA/133
>> [56284.782879] sd 24:0:0:0: [sdc] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56284.782882] sd 24:0:0:0: [sdc] tag#13 Sense Key : Medium Error [current] 
>> [56284.782884] sd 24:0:0:0: [sdc] tag#13 Add. Sense: Unrecovered read error
>> [56284.782886] sd 24:0:0:0: [sdc] tag#13 CDB: Read(16) 88 00 00 00 00 02 15 fb ad 08 00 00 02 f8 00 00
>> [56284.782887] critical medium error, dev sdc, sector 8958749960 op 0x0:(READ) flags 0x80700 phys_seg 94 prio class 0
>> [56284.793258] ata25: EH complete
>> [56291.737028] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56291.744159] ata25.00: irq_stat 0x40000008
>> [56291.748188] ata25.00: failed command: READ FPDMA QUEUED
>> [56291.753427] ata25.00: cmd 60/08:c8:50:ae:fb/00:00:15:02:00/40 tag 25 ncq dma 4096 in
>>                        res 43/40:08:50:ae:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56291.769386] ata25.00: status: { DRDY SENSE ERR }
>> [56291.774088] ata25.00: error: { UNC }
>> [56291.880394] ata25.00: configured for UDMA/133
>> [56291.880466] sd 24:0:0:0: [sdc] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56291.880470] sd 24:0:0:0: [sdc] tag#25 Sense Key : Medium Error [current] 
>> [56291.880472] sd 24:0:0:0: [sdc] tag#25 Add. Sense: Unrecovered read error
>> [56291.880475] sd 24:0:0:0: [sdc] tag#25 CDB: Read(16) 88 00 00 00 00 02 15 fb ae 50 00 00 00 08 00 00
>> [56291.880476] critical medium error, dev sdc, sector 8958750288 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56291.890675] ata25: EH complete
>> [56294.678468] ata25.00: exception Emask 0x0 SAct 0x3800800 SErr 0x0 action 0x0
>> [56294.685525] ata25.00: irq_stat 0x40000008
>> [56294.689551] ata25.00: failed command: READ FPDMA QUEUED
>> [56294.694778] ata25.00: cmd 60/08:c0:60:ae:fb/00:00:15:02:00/40 tag 24 ncq dma 4096 in
>>                        res 43/40:08:60:ae:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56294.710714] ata25.00: status: { DRDY SENSE ERR }
>> [56294.715342] ata25.00: error: { UNC }
>> [56294.804414] ata25.00: configured for UDMA/133
>> [56294.804432] sd 24:0:0:0: [sdc] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
>> [56294.804436] sd 24:0:0:0: [sdc] tag#24 Sense Key : Medium Error [current] 
>> [56294.804438] sd 24:0:0:0: [sdc] tag#24 Add. Sense: Unrecovered read error
>> [56294.804440] sd 24:0:0:0: [sdc] tag#24 CDB: Read(16) 88 00 00 00 00 02 15 fb ae 60 00 00 00 08 00 00
>> [56294.804442] critical medium error, dev sdc, sector 8958750304 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56294.814622] ata25: EH complete
>> [56294.823246] md/raid:md127: read error corrected (8 sectors at 8958715472 on dm-1)
>> [56295.533704] md/raid:md127: read error corrected (8 sectors at 8958715488 on dm-1)
>> [56297.739116] ata25.00: exception Emask 0x0 SAct 0x811807fe SErr 0x0 action 0x0
>> [56297.746250] ata25.00: irq_stat 0x40000008
>> [56297.750274] ata25.00: failed command: READ FPDMA QUEUED
>> [56297.755507] ata25.00: cmd 60/08:08:70:a0:fb/00:00:15:02:00/40 tag 1 ncq dma 4096 in
>>                        res 43/40:08:70:a0:fb/00:00:15:02:00/00 Emask 0x408 (media error) <F>
>> [56297.771358] ata25.00: status: { DRDY SENSE ERR }
>> [56297.775986] ata25.00: error: { UNC }
>> [56297.861641] ata25.00: configured for UDMA/133
>> [56297.861658] sd 24:0:0:0: [sdc] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56297.861661] sd 24:0:0:0: [sdc] tag#1 Sense Key : Medium Error [current] 
>> [56297.861664] sd 24:0:0:0: [sdc] tag#1 Add. Sense: Unrecovered read error
>> [56297.861666] sd 24:0:0:0: [sdc] tag#1 CDB: Read(16) 88 00 00 00 00 02 15 fb a0 70 00 00 00 08 00 00
>> [56297.861668] critical medium error, dev sdc, sector 8958746736 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56297.871852] ata25: EH complete
>> [56298.209332] md/raid:md127: read error corrected (8 sectors at 8958711920 on dm-1)
>> [56305.006612] ata25.00: exception Emask 0x0 SAct 0x3f28f00 SErr 0x0 action 0x0
>> [56305.013664] ata25.00: irq_stat 0x40000008
>> [56305.017691] ata25.00: failed command: READ FPDMA QUEUED
>> [56305.022937] ata25.00: cmd 60/00:78:00:90:15/05:00:16:02:00/40 tag 15 ncq dma 655360 in
>>                        res 43/40:00:40:93:15/00:05:16:02:00/00 Emask 0x408 (media error) <F>
>> [56305.039048] ata25.00: status: { DRDY SENSE ERR }
>> [56305.043677] ata25.00: error: { UNC }
>> [56305.168764] ata25.00: configured for UDMA/133
>> [56305.168820] sd 24:0:0:0: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56305.168824] sd 24:0:0:0: [sdc] tag#15 Sense Key : Medium Error [current] 
>> [56305.168826] sd 24:0:0:0: [sdc] tag#15 Add. Sense: Unrecovered read error
>> [56305.168828] sd 24:0:0:0: [sdc] tag#15 CDB: Read(16) 88 00 00 00 00 02 16 15 90 00 00 00 05 00 00 00
>> [56305.168830] critical medium error, dev sdc, sector 8960446464 op 0x0:(READ) flags 0x80700 phys_seg 159 prio class 0
>> [56305.179276] ata25: EH complete
>> [56307.266729] ata25.00: exception Emask 0x0 SAct 0xec SErr 0x0 action 0x0
>> [56307.273348] ata25.00: irq_stat 0x40000008
>> [56307.277379] ata25.00: failed command: READ FPDMA QUEUED
>> [56307.282614] ata25.00: cmd 60/40:10:00:9d:15/05:00:16:02:00/40 tag 2 ncq dma 688128 in
>>                        res 43/40:40:20:a1:15/00:05:16:02:00/00 Emask 0x408 (media error) <F>
>> [56307.298638] ata25.00: status: { DRDY SENSE ERR }
>> [56307.303270] ata25.00: error: { UNC }
>> [56307.399985] ata25.00: configured for UDMA/133
>> [56307.400014] sd 24:0:0:0: [sdc] tag#2 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56307.400017] sd 24:0:0:0: [sdc] tag#2 Sense Key : Medium Error [current] 
>> [56307.400019] sd 24:0:0:0: [sdc] tag#2 Add. Sense: Unrecovered read error
>> [56307.400021] sd 24:0:0:0: [sdc] tag#2 CDB: Read(16) 88 00 00 00 00 02 16 15 9d 00 00 00 05 40 00 00
>> [56307.400023] critical medium error, dev sdc, sector 8960449792 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [56307.410470] ata25: EH complete
>> [56310.075585] ata25.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
>> [56310.082119] ata25.00: irq_stat 0x40000008
>> [56310.086146] ata25.00: failed command: READ FPDMA QUEUED
>> [56310.091383] ata25.00: cmd 60/08:00:40:93:15/00:00:16:02:00/40 tag 0 ncq dma 4096 in
>>                        res 43/40:08:40:93:15/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56310.107240] ata25.00: status: { DRDY SENSE ERR }
>> [56310.111869] ata25.00: error: { UNC }
>> [56310.249012] ata25.00: configured for UDMA/133
>> [56310.249025] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56310.249028] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
>> [56310.249031] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
>> [56310.249033] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 16 15 93 40 00 00 00 08 00 00
>> [56310.249035] critical medium error, dev sdc, sector 8960447296 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56310.259210] ata25: EH complete
>> [56312.894324] md/raid:md127: read error corrected (8 sectors at 8960412480 on dm-1)
>> [56315.319228] ata25.00: exception Emask 0x0 SAct 0xe0000 SErr 0x0 action 0x0
>> [56315.326108] ata25.00: irq_stat 0x40000008
>> [56315.330136] ata25.00: failed command: READ FPDMA QUEUED
>> [56315.335376] ata25.00: cmd 60/08:88:20:a1:15/00:00:16:02:00/40 tag 17 ncq dma 4096 in
>>                        res 43/40:08:20:a1:15/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56315.351313] ata25.00: status: { DRDY SENSE ERR }
>> [56315.355946] ata25.00: error: { UNC }
>> [56315.488843] ata25.00: configured for UDMA/133
>> [56315.488853] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56315.488856] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
>> [56315.488857] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
>> [56315.488859] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 16 15 a1 20 00 00 00 08 00 00
>> [56315.488861] critical medium error, dev sdc, sector 8960450848 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56315.499030] ata25: EH complete
>> [56316.119276] md/raid:md127: read error corrected (8 sectors at 8960416032 on dm-1)
>> [56324.775934] ata25.00: exception Emask 0x0 SAct 0x7000000f SErr 0x0 action 0x0
>> [56324.783070] ata25.00: irq_stat 0x40000008
>> [56324.787095] ata25.00: failed command: READ FPDMA QUEUED
>> [56324.792337] ata25.00: cmd 60/c0:e0:68:ca:34/02:00:16:02:00/40 tag 28 ncq dma 360448 in
>>                        res 43/40:c0:88:cb:34/00:02:16:02:00/00 Emask 0x408 (media error) <F>
>> [56324.808453] ata25.00: status: { DRDY SENSE ERR }
>> [56324.813084] ata25.00: error: { UNC }
>> [56324.935531] ata25.00: configured for UDMA/133
>> [56324.935573] sd 24:0:0:0: [sdc] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=5s
>> [56324.935577] sd 24:0:0:0: [sdc] tag#28 Sense Key : Medium Error [current] 
>> [56324.935579] sd 24:0:0:0: [sdc] tag#28 Add. Sense: Unrecovered read error
>> [56324.935582] sd 24:0:0:0: [sdc] tag#28 CDB: Read(16) 88 00 00 00 00 02 16 34 ca 68 00 00 02 c0 00 00
>> [56324.935583] critical medium error, dev sdc, sector 8962493032 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
>> [56324.945933] ata25: EH complete
>> [56327.568023] ata25.00: exception Emask 0x0 SAct 0x7f8001ff SErr 0x0 action 0x0
>> [56327.575156] ata25.00: irq_stat 0x40000008
>> [56327.579185] ata25.00: failed command: READ FPDMA QUEUED
>> [56327.584415] ata25.00: cmd 60/08:b8:88:cb:34/00:00:16:02:00/40 tag 23 ncq dma 4096 in
>>                        res 43/40:08:88:cb:34/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56327.600346] ata25.00: status: { DRDY SENSE ERR }
>> [56327.604973] ata25.00: error: { UNC }
>> [56327.692932] ata25.00: configured for UDMA/133
>> [56327.692968] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56327.692971] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
>> [56327.692973] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
>> [56327.692976] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 16 34 cb 88 00 00 00 08 00 00
>> [56327.692978] critical medium error, dev sdc, sector 8962493320 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56327.703153] ata25: EH complete
>> [56330.009432] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56330.016578] ata25.00: irq_stat 0x40000008
>> [56330.020604] ata25.00: failed command: READ FPDMA QUEUED
>> [56330.025845] ata25.00: cmd 60/08:b8:90:cb:34/00:00:16:02:00/40 tag 23 ncq dma 4096 in
>>                        res 43/40:08:90:cb:34/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56330.041782] ata25.00: status: { DRDY SENSE ERR }
>> [56330.046407] ata25.00: error: { UNC }
>> [56330.142059] ata25.00: configured for UDMA/133
>> [56330.142130] sd 24:0:0:0: [sdc] tag#23 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56330.142133] sd 24:0:0:0: [sdc] tag#23 Sense Key : Medium Error [current] 
>> [56330.142135] sd 24:0:0:0: [sdc] tag#23 Add. Sense: Unrecovered read error
>> [56330.142136] sd 24:0:0:0: [sdc] tag#23 CDB: Read(16) 88 00 00 00 00 02 16 34 cb 90 00 00 00 08 00 00
>> [56330.142138] critical medium error, dev sdc, sector 8962493328 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56330.152328] ata25: EH complete
>> [56330.473245] md/raid:md127: read error corrected (8 sectors at 8962458504 on dm-1)
>> [56330.475305] md/raid:md127: read error corrected (8 sectors at 8962458512 on dm-1)
>> [56381.913875] ata25.00: exception Emask 0x0 SAct 0x8 SErr 0x0 action 0x0
>> [56381.920405] ata25.00: irq_stat 0x40000008
>> [56381.924429] ata25.00: failed command: READ FPDMA QUEUED
>> [56381.929664] ata25.00: cmd 60/00:18:00:d0:6b/05:00:16:02:00/40 tag 3 ncq dma 655360 in
>>                        res 43/40:00:18:d0:6b/00:05:16:02:00/00 Emask 0x408 (media error) <F>
>> [56381.945685] ata25.00: status: { DRDY SENSE ERR }
>> [56381.950313] ata25.00: error: { UNC }
>> [56382.040583] ata25.00: configured for UDMA/133
>> [56382.040598] sd 24:0:0:0: [sdc] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56382.040601] sd 24:0:0:0: [sdc] tag#3 Sense Key : Medium Error [current] 
>> [56382.040603] sd 24:0:0:0: [sdc] tag#3 Add. Sense: Unrecovered read error
>> [56382.040605] sd 24:0:0:0: [sdc] tag#3 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 00 00 00 05 00 00 00
>> [56382.040606] critical medium error, dev sdc, sector 8966098944 op 0x0:(READ) flags 0x80700 phys_seg 160 prio class 0
>> [56382.051045] ata25: EH complete
>> [56384.109723] ata25.00: exception Emask 0x0 SAct 0x7fe00000 SErr 0x0 action 0x0
>> [56384.116861] ata25.00: irq_stat 0x40000008
>> [56384.120885] ata25.00: failed command: READ FPDMA QUEUED
>> [56384.126117] ata25.00: cmd 60/40:a8:00:dd:6b/05:00:16:02:00/40 tag 21 ncq dma 688128 in
>>                        res 43/40:40:f8:dd:6b/00:05:16:02:00/00 Emask 0x408 (media error) <F>
>> [56384.142231] ata25.00: status: { DRDY SENSE ERR }
>> [56384.146862] ata25.00: error: { UNC }
>> [56384.264767] ata25.00: configured for UDMA/133
>> [56384.264806] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56384.264809] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
>> [56384.264810] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Unrecovered read error
>> [56384.264812] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 16 6b dd 00 00 00 05 40 00 00
>> [56384.264813] critical medium error, dev sdc, sector 8966102272 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [56384.275275] ata25: EH complete
>> [56386.346397] ata25.00: exception Emask 0x0 SAct 0x7f7d0 SErr 0x0 action 0x0
>> [56386.353275] ata25.00: irq_stat 0x40000008
>> [56386.357302] ata25.00: failed command: READ FPDMA QUEUED
>> [56386.362537] ata25.00: cmd 60/c0:30:40:ea:6b/02:00:16:02:00/40 tag 6 ncq dma 360448 in
>>                        res 43/40:c0:d8:eb:6b/00:02:16:02:00/00 Emask 0x408 (media error) <F>
>> [56386.378556] ata25.00: status: { DRDY SENSE ERR }
>> [56386.383185] ata25.00: error: { UNC }
>> [56386.472369] ata25.00: configured for UDMA/133
>> [56386.472430] sd 24:0:0:0: [sdc] tag#6 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56386.472433] sd 24:0:0:0: [sdc] tag#6 Sense Key : Medium Error [current] 
>> [56386.472435] sd 24:0:0:0: [sdc] tag#6 Add. Sense: Unrecovered read error
>> [56386.472438] sd 24:0:0:0: [sdc] tag#6 CDB: Read(16) 88 00 00 00 00 02 16 6b ea 40 00 00 02 c0 00 00
>> [56386.472439] critical medium error, dev sdc, sector 8966105664 op 0x0:(READ) flags 0x84700 phys_seg 88 prio class 0
>> [56386.482798] ata25: EH complete
>> [56388.841034] ata25.00: exception Emask 0x0 SAct 0x80000041 SErr 0x0 action 0x0
>> [56388.848169] ata25.00: irq_stat 0x40000008
>> [56388.852195] ata25.00: failed command: READ FPDMA QUEUED
>> [56388.857423] ata25.00: cmd 60/08:f8:18:d0:6b/00:00:16:02:00/40 tag 31 ncq dma 4096 in
>>                        res 43/40:08:18:d0:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56388.873364] ata25.00: status: { DRDY SENSE ERR }
>> [56388.877996] ata25.00: error: { UNC }
>> [56388.988119] ata25.00: configured for UDMA/133
>> [56388.988136] sd 24:0:0:0: [sdc] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56388.988139] sd 24:0:0:0: [sdc] tag#31 Sense Key : Medium Error [current] 
>> [56388.988141] sd 24:0:0:0: [sdc] tag#31 Add. Sense: Unrecovered read error
>> [56388.988142] sd 24:0:0:0: [sdc] tag#31 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 18 00 00 00 08 00 00
>> [56388.988143] critical medium error, dev sdc, sector 8966098968 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56388.998324] ata25: EH complete
>> [56391.522804] ata25.00: exception Emask 0x0 SAct 0x1e9c179c SErr 0x0 action 0x0
>> [56391.529945] ata25.00: irq_stat 0x40000008
>> [56391.533976] ata25.00: failed command: READ FPDMA QUEUED
>> [56391.539212] ata25.00: cmd 60/08:d8:20:d0:6b/00:00:16:02:00/40 tag 27 ncq dma 4096 in
>>                        res 43/40:08:20:d0:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56391.555146] ata25.00: status: { DRDY SENSE ERR }
>> [56391.559778] ata25.00: error: { UNC }
>> [56391.695556] ata25.00: configured for UDMA/133
>> [56391.695634] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56391.695637] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
>> [56391.695639] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
>> [56391.695642] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 16 6b d0 20 00 00 00 08 00 00
>> [56391.695643] critical medium error, dev sdc, sector 8966098976 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56391.705828] ata25: EH complete
>> [56393.848207] ata25.00: exception Emask 0x0 SAct 0xf9fff8bf SErr 0x0 action 0x0
>> [56393.855346] ata25.00: irq_stat 0x40000008
>> [56393.859386] ata25.00: failed command: READ FPDMA QUEUED
>> [56393.864624] ata25.00: cmd 60/c8:d8:48:3a:6e/02:00:16:02:00/40 tag 27 ncq dma 364544 in
>>                        res 43/40:c8:08:3b:6e/00:02:16:02:00/00 Emask 0x408 (media error) <F>
>> [56393.880732] ata25.00: status: { DRDY SENSE ERR }
>> [56393.885357] ata25.00: error: { UNC }
>> [56393.986399] ata25.00: configured for UDMA/133
>> [56393.986540] sd 24:0:0:0: [sdc] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56393.986544] sd 24:0:0:0: [sdc] tag#27 Sense Key : Medium Error [current] 
>> [56393.986546] sd 24:0:0:0: [sdc] tag#27 Add. Sense: Unrecovered read error
>> [56393.986549] sd 24:0:0:0: [sdc] tag#27 CDB: Read(16) 88 00 00 00 00 02 16 6e 3a 48 00 00 02 c8 00 00
>> [56393.986550] critical medium error, dev sdc, sector 8966257224 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 0
>> [56393.996900] ata25: EH complete
>> [56396.083820] md/raid:md127: read error corrected (8 sectors at 8966064160 on dm-1)
>> [56396.083837] md/raid:md127: read error corrected (8 sectors at 8966064152 on dm-1)
>> [56400.565346] ata25.00: exception Emask 0x0 SAct 0xfdfbf7df SErr 0x0 action 0x0
>> [56400.572485] ata25.00: irq_stat 0x40000008
>> [56400.576527] ata25.00: failed command: READ FPDMA QUEUED
>> [56400.581758] ata25.00: cmd 60/08:60:08:3b:6e/00:00:16:02:00/40 tag 12 ncq dma 4096 in
>>                        res 43/40:08:08:3b:6e/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56400.597695] ata25.00: status: { DRDY SENSE ERR }
>> [56400.602329] ata25.00: error: { UNC }
>> [56400.734083] ata25.00: configured for UDMA/133
>> [56400.734136] sd 24:0:0:0: [sdc] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56400.734139] sd 24:0:0:0: [sdc] tag#12 Sense Key : Medium Error [current] 
>> [56400.734142] sd 24:0:0:0: [sdc] tag#12 Add. Sense: Unrecovered read error
>> [56400.734144] sd 24:0:0:0: [sdc] tag#12 CDB: Read(16) 88 00 00 00 00 02 16 6e 3b 08 00 00 00 08 00 00
>> [56400.734146] critical medium error, dev sdc, sector 8966257416 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56400.744333] ata25: EH complete
>> [56401.189218] md/raid:md127: read error corrected (8 sectors at 8966222600 on dm-1)
>> [56403.685654] ata25.00: exception Emask 0x0 SAct 0x701ffff0 SErr 0x0 action 0x0
>> [56403.692794] ata25.00: irq_stat 0x40000008
>> [56403.696816] ata25.00: failed command: READ FPDMA QUEUED
>> [56403.702049] ata25.00: cmd 60/08:20:d8:eb:6b/00:00:16:02:00/40 tag 4 ncq dma 4096 in
>>                        res 43/40:08:d8:eb:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56403.717891] ata25.00: status: { DRDY SENSE ERR }
>> [56403.722515] ata25.00: error: { UNC }
>> [56403.833001] ata25.00: configured for UDMA/133
>> [56403.833024] sd 24:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56403.833029] sd 24:0:0:0: [sdc] tag#4 Sense Key : Medium Error [current] 
>> [56403.833032] sd 24:0:0:0: [sdc] tag#4 Add. Sense: Unrecovered read error
>> [56403.833034] sd 24:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 16 6b eb d8 00 00 00 08 00 00
>> [56403.833036] critical medium error, dev sdc, sector 8966106072 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56403.843247] ata25: EH complete
>> [56406.373840] ata25.00: exception Emask 0x0 SAct 0x7f81fe6b SErr 0x0 action 0x0
>> [56406.380972] ata25.00: irq_stat 0x40000008
>> [56406.385004] ata25.00: failed command: READ FPDMA QUEUED
>> [56406.390247] ata25.00: cmd 60/08:48:e0:eb:6b/00:00:16:02:00/40 tag 9 ncq dma 4096 in
>>                        res 43/40:08:e0:eb:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56406.406107] ata25.00: status: { DRDY SENSE ERR }
>> [56406.410737] ata25.00: error: { UNC }
>> [56406.507071] ata25.00: configured for UDMA/133
>> [56406.507094] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56406.507096] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
>> [56406.507098] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
>> [56406.507100] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 16 6b eb e0 00 00 00 08 00 00
>> [56406.507101] critical medium error, dev sdc, sector 8966106080 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56406.517282] ata25: EH complete
>> [56408.531429] ata25.00: exception Emask 0x0 SAct 0xbfff87ff SErr 0x0 action 0x0
>> [56408.538690] ata25.00: irq_stat 0x40000008
>> [56408.542722] ata25.00: failed command: READ FPDMA QUEUED
>> [56408.547952] ata25.00: cmd 60/08:78:f8:dd:6b/00:00:16:02:00/40 tag 15 ncq dma 4096 in
>>                        res 43/40:08:f8:dd:6b/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56408.563920] ata25.00: status: { DRDY SENSE ERR }
>> [56408.568559] ata25.00: error: { UNC }
>> [56408.664647] ata25.00: configured for UDMA/133
>> [56408.664684] sd 24:0:0:0: [sdc] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56408.664687] sd 24:0:0:0: [sdc] tag#15 Sense Key : Medium Error [current] 
>> [56408.664689] sd 24:0:0:0: [sdc] tag#15 Add. Sense: Unrecovered read error
>> [56408.664690] sd 24:0:0:0: [sdc] tag#15 CDB: Read(16) 88 00 00 00 00 02 16 6b dd f8 00 00 00 08 00 00
>> [56408.664692] critical medium error, dev sdc, sector 8966102520 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56408.674879] ata25: EH complete
>> [56410.899004] md/raid:md127: read error corrected (8 sectors at 8966071256 on dm-1)
>> [56410.899015] md/raid:md127: read error corrected (8 sectors at 8966071264 on dm-1)
>> [56412.014769] md/raid:md127: read error corrected (8 sectors at 8966067704 on dm-1)
>> [56419.657721] ata25.00: exception Emask 0x0 SAct 0x1eff1f SErr 0x0 action 0x0
>> [56419.664684] ata25.00: irq_stat 0x40000008
>> [56419.668729] ata25.00: failed command: READ FPDMA QUEUED
>> [56419.673959] ata25.00: cmd 60/00:88:00:15:6c/08:00:16:02:00/40 tag 17 ncq dma 1048576 in
>>                        res 43/40:00:78:15:6c/00:08:16:02:00/00 Emask 0x408 (media error) <F>
>> [56419.690151] ata25.00: status: { DRDY SENSE ERR }
>> [56419.694787] ata25.00: error: { UNC }
>> [56419.827430] ata25.00: configured for UDMA/133
>> [56419.827504] sd 24:0:0:0: [sdc] tag#17 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56419.827507] sd 24:0:0:0: [sdc] tag#17 Sense Key : Medium Error [current] 
>> [56419.827509] sd 24:0:0:0: [sdc] tag#17 Add. Sense: Unrecovered read error
>> [56419.827510] sd 24:0:0:0: [sdc] tag#17 CDB: Read(16) 88 00 00 00 00 02 16 6c 15 00 00 00 08 00 00 00
>> [56419.827511] critical medium error, dev sdc, sector 8966116608 op 0x0:(READ) flags 0x84700 phys_seg 29 prio class 0
>> [56419.837858] ata25: EH complete
>> [56421.908274] ata25.00: exception Emask 0x0 SAct 0x70e1847f SErr 0x0 action 0x0
>> [56421.915411] ata25.00: irq_stat 0x40000008
>> [56421.919444] ata25.00: failed command: READ FPDMA QUEUED
>> [56421.924674] ata25.00: cmd 60/00:00:00:1d:6c/08:00:16:02:00/40 tag 0 ncq dma 1048576 in
>>                        res 43/40:00:50:23:6c/00:08:16:02:00/00 Emask 0x408 (media error) <F>
>> [56421.940790] ata25.00: status: { DRDY SENSE ERR }
>> [56421.945421] ata25.00: error: { UNC }
>> [56422.051628] ata25.00: configured for UDMA/133
>> [56422.051706] sd 24:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56422.051710] sd 24:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current] 
>> [56422.051712] sd 24:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
>> [56422.051714] sd 24:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 02 16 6c 1d 00 00 00 08 00 00 00
>> [56422.051716] critical medium error, dev sdc, sector 8966118656 op 0x0:(READ) flags 0x84700 phys_seg 29 prio class 0
>> [56422.062114] ata25: EH complete
>> [56424.259349] ata25.00: exception Emask 0x0 SAct 0x6b1ffc08 SErr 0x0 action 0x0
>> [56424.266486] ata25.00: irq_stat 0x40000008
>> [56424.270517] ata25.00: failed command: READ FPDMA QUEUED
>> [56424.275749] ata25.00: cmd 60/08:50:78:15:6c/00:00:16:02:00/40 tag 10 ncq dma 4096 in
>>                        res 43/40:08:78:15:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56424.291690] ata25.00: status: { DRDY SENSE ERR }
>> [56424.296324] ata25.00: error: { UNC }
>> [56424.384160] ata25.00: configured for UDMA/133
>> [56424.384177] sd 24:0:0:0: [sdc] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56424.384180] sd 24:0:0:0: [sdc] tag#10 Sense Key : Medium Error [current] 
>> [56424.384182] sd 24:0:0:0: [sdc] tag#10 Add. Sense: Unrecovered read error
>> [56424.384184] sd 24:0:0:0: [sdc] tag#10 CDB: Read(16) 88 00 00 00 00 02 16 6c 15 78 00 00 00 08 00 00
>> [56424.384185] critical medium error, dev sdc, sector 8966116728 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56424.394369] ata25: EH complete
>> [56424.599523] md/raid:md127: read error corrected (8 sectors at 8966081912 on dm-1)
>> [56428.528870] ata25.00: exception Emask 0x0 SAct 0xff747eff SErr 0x0 action 0x0
>> [56428.536009] ata25.00: irq_stat 0x40000008
>> [56428.540060] ata25.00: failed command: READ FPDMA QUEUED
>> [56428.545308] ata25.00: cmd 60/08:48:50:23:6c/00:00:16:02:00/40 tag 9 ncq dma 4096 in
>>                        res 43/40:08:50:23:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56428.561598] ata25.00: status: { DRDY SENSE ERR }
>> [56428.566242] ata25.00: error: { UNC }
>> [56428.690982] ata25.00: configured for UDMA/133
>> [56428.691017] sd 24:0:0:0: [sdc] tag#9 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [56428.691020] sd 24:0:0:0: [sdc] tag#9 Sense Key : Medium Error [current] 
>> [56428.691022] sd 24:0:0:0: [sdc] tag#9 Add. Sense: Unrecovered read error
>> [56428.691023] sd 24:0:0:0: [sdc] tag#9 CDB: Read(16) 88 00 00 00 00 02 16 6c 23 50 00 00 00 08 00 00
>> [56428.691025] critical medium error, dev sdc, sector 8966120272 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56428.701217] ata25: EH complete
>> [56430.789563] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [56430.796699] ata25.00: irq_stat 0x40000008
>> [56430.800724] ata25.00: failed command: READ FPDMA QUEUED
>> [56430.805977] ata25.00: cmd 60/08:a0:58:23:6c/00:00:16:02:00/40 tag 20 ncq dma 4096 in
>>                        res 43/40:08:58:23:6c/00:00:16:02:00/00 Emask 0x408 (media error) <F>
>> [56430.822085] ata25.00: status: { DRDY SENSE ERR }
>> [56430.826719] ata25.00: error: { UNC }
>> [56430.923535] ata25.00: configured for UDMA/133
>> [56430.923604] sd 24:0:0:0: [sdc] tag#20 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
>> [56430.923607] sd 24:0:0:0: [sdc] tag#20 Sense Key : Medium Error [current] 
>> [56430.923608] sd 24:0:0:0: [sdc] tag#20 Add. Sense: Unrecovered read error
>> [56430.923610] sd 24:0:0:0: [sdc] tag#20 CDB: Read(16) 88 00 00 00 00 02 16 6c 23 58 00 00 00 08 00 00
>> [56430.923611] critical medium error, dev sdc, sector 8966120280 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [56430.933789] ata25: EH complete
>> [56431.464686] md/raid:md127: read error corrected (8 sectors at 8966085464 on dm-1)
>> [56431.464684] md/raid:md127: read error corrected (8 sectors at 8966085456 on dm-1)
>> [57881.912653] ata25.00: exception Emask 0x0 SAct 0x843ff807 SErr 0x0 action 0x0
>> [57881.919808] ata25.00: irq_stat 0x40000008
>> [57881.923835] ata25.00: failed command: READ FPDMA QUEUED
>> [57881.929074] ata25.00: cmd 60/48:58:00:65:4c/05:00:1d:02:00/40 tag 11 ncq dma 692224 in
>>                        res 43/40:48:68:66:4c/00:05:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57881.945260] ata25.00: status: { DRDY SENSE ERR }
>> [57881.949953] ata25.00: error: { UNC }
>> [57882.041190] ata25.00: configured for UDMA/133
>> [57882.041253] sd 24:0:0:0: [sdc] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=3s
>> [57882.041256] sd 24:0:0:0: [sdc] tag#11 Sense Key : Medium Error [current] 
>> [57882.041258] sd 24:0:0:0: [sdc] tag#11 Add. Sense: Data synchronization mark error
>> [57882.041260] sd 24:0:0:0: [sdc] tag#11 CDB: Read(16) 88 00 00 00 00 02 1d 4c 65 00 00 00 05 48 00 00
>> [57882.041261] I/O error, dev sdc, sector 9081480448 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 0
>> [57882.050685] ata25: EH complete
>> [57885.724654] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [57885.731866] ata25.00: irq_stat 0x40000008
>> [57885.735894] ata25.00: failed command: READ FPDMA QUEUED
>> [57885.741217] ata25.00: cmd 60/08:20:68:66:4c/00:00:1d:02:00/40 tag 4 ncq dma 4096 in
>>                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57885.757076] ata25.00: status: { DRDY SENSE ERR }
>> [57885.761721] ata25.00: error: { UNC }
>> [57885.898142] ata25.00: configured for UDMA/133
>> [57885.898197] ata25: EH complete
>> [57888.362234] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [57888.369399] ata25.00: irq_stat 0x40000008
>> [57888.373430] ata25.00: failed command: READ FPDMA QUEUED
>> [57888.378665] ata25.00: cmd 60/08:60:68:66:4c/00:00:1d:02:00/40 tag 12 ncq dma 4096 in
>>                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57888.394601] ata25.00: status: { DRDY SENSE ERR }
>> [57888.399243] ata25.00: error: { UNC }
>> [57888.488931] ata25.00: configured for UDMA/133
>> [57888.488992] ata25: EH complete
>> [57891.013850] ata25.00: exception Emask 0x0 SAct 0xbffc00b6 SErr 0x0 action 0x0
>> [57891.020998] ata25.00: irq_stat 0x40000008
>> [57891.025037] ata25.00: failed command: READ FPDMA QUEUED
>> [57891.030565] ata25.00: cmd 60/08:90:68:66:4c/00:00:1d:02:00/40 tag 18 ncq dma 4096 in
>>                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57891.046568] ata25.00: status: { DRDY SENSE ERR }
>> [57891.051257] ata25.00: error: { UNC }
>> [57891.154641] ata25.00: configured for UDMA/133
>> [57891.154667] ata25: EH complete
>> [57893.666329] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [57893.673470] ata25.00: irq_stat 0x40000008
>> [57893.677495] ata25.00: failed command: READ FPDMA QUEUED
>> [57893.682722] ata25.00: cmd 60/08:48:68:66:4c/00:00:1d:02:00/40 tag 9 ncq dma 4096 in
>>                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57893.698570] ata25.00: status: { DRDY SENSE ERR }
>> [57893.703198] ata25.00: error: { UNC }
>> [57893.795404] ata25.00: configured for UDMA/133
>> [57893.795458] ata25: EH complete
>> [57896.285432] ata25.00: exception Emask 0x0 SAct 0x3ffc0020 SErr 0x0 action 0x0
>> [57896.292651] ata25.00: irq_stat 0x40000008
>> [57896.296685] ata25.00: failed command: READ FPDMA QUEUED
>> [57896.301920] ata25.00: cmd 60/08:90:68:66:4c/00:00:1d:02:00/40 tag 18 ncq dma 4096 in
>>                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57896.317965] ata25.00: status: { DRDY SENSE ERR }
>> [57896.322722] ata25.00: error: { UNC }
>> [57896.419495] ata25.00: configured for UDMA/133
>> [57896.419516] ata25: EH complete
>> [57898.960626] ata25.00: exception Emask 0x0 SAct 0xffffffff SErr 0x0 action 0x0
>> [57898.967775] ata25.00: irq_stat 0x40000008
>> [57898.971808] ata25.00: failed command: READ FPDMA QUEUED
>> [57898.977066] ata25.00: cmd 60/08:70:68:66:4c/00:00:1d:02:00/40 tag 14 ncq dma 4096 in
>>                        res 43/40:08:68:66:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57898.993042] ata25.00: status: { DRDY SENSE ERR }
>> [57898.997677] ata25.00: error: { UNC }
>> [57899.085209] ata25.00: configured for UDMA/133
>> [57899.085258] sd 24:0:0:0: [sdc] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
>> [57899.085260] sd 24:0:0:0: [sdc] tag#14 Sense Key : Medium Error [current] 
>> [57899.085262] sd 24:0:0:0: [sdc] tag#14 Add. Sense: Data synchronization mark error
>> [57899.085264] sd 24:0:0:0: [sdc] tag#14 CDB: Read(16) 88 00 00 00 00 02 1d 4c 66 68 00 00 00 08 00 00
>> [57899.085265] I/O error, dev sdc, sector 9081480808 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [57899.094430] ata25: EH complete
>> [57900.557539] md/raid:md127: read error corrected (8 sectors at 9081445992 on dm-1)
>> [57903.266393] ata25.00: exception Emask 0x0 SAct 0x7fe0c3ff SErr 0x0 action 0x0
>> [57903.273531] ata25.00: irq_stat 0x40000008
>> [57903.277559] ata25.00: failed command: READ FPDMA QUEUED
>> [57903.282790] ata25.00: cmd 60/10:a8:00:70:4c/05:00:1d:02:00/40 tag 21 ncq dma 663552 in
>>                        res 43/40:10:40:74:4c/00:05:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57903.298899] ata25.00: status: { DRDY SENSE ERR }
>> [57903.303523] ata25.00: error: { UNC }
>> [57903.433667] ata25.00: configured for UDMA/133
>> [57903.433735] sd 24:0:0:0: [sdc] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
>> [57903.433738] sd 24:0:0:0: [sdc] tag#21 Sense Key : Medium Error [current] 
>> [57903.433740] sd 24:0:0:0: [sdc] tag#21 Add. Sense: Data synchronization mark error
>> [57903.433742] sd 24:0:0:0: [sdc] tag#21 CDB: Read(16) 88 00 00 00 00 02 1d 4c 70 00 00 00 05 10 00 00
>> [57903.433743] I/O error, dev sdc, sector 9081483264 op 0x0:(READ) flags 0x80700 phys_seg 162 prio class 0
>> [57903.443152] ata25: EH complete
>> [57906.791480] ata25.00: exception Emask 0x0 SAct 0x7f0fffff SErr 0x0 action 0x0
>> [57906.798705] ata25.00: irq_stat 0x40000008
>> [57906.802742] ata25.00: failed command: READ FPDMA QUEUED
>> [57906.807980] ata25.00: cmd 60/08:c0:40:74:4c/00:00:1d:02:00/40 tag 24 ncq dma 4096 in
>>                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57906.823914] ata25.00: status: { DRDY SENSE ERR }
>> [57906.828540] ata25.00: error: { UNC }
>> [57906.990783] ata25.00: configured for UDMA/133
>> [57906.990830] ata25: EH complete
>> [57909.373378] ata25.00: exception Emask 0x0 SAct 0x7800000f SErr 0x0 action 0x0
>> [57909.380521] ata25.00: irq_stat 0x40000008
>> [57909.384548] ata25.00: failed command: READ FPDMA QUEUED
>> [57909.389785] ata25.00: cmd 60/08:e0:40:74:4c/00:00:1d:02:00/40 tag 28 ncq dma 4096 in
>>                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57909.405722] ata25.00: status: { DRDY SENSE ERR }
>> [57909.410352] ata25.00: error: { UNC }
>> [57909.548237] ata25.00: configured for UDMA/133
>> [57909.548284] ata25: EH complete
>> [57911.907322] ata25.00: exception Emask 0x0 SAct 0x7003feff SErr 0x0 action 0x0
>> [57911.914465] ata25.00: irq_stat 0x40000008
>> [57911.918490] ata25.00: failed command: READ FPDMA QUEUED
>> [57911.923726] ata25.00: cmd 60/08:e8:40:74:4c/00:00:1d:02:00/40 tag 29 ncq dma 4096 in
>>                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57911.939665] ata25.00: status: { DRDY SENSE ERR }
>> [57911.944297] ata25.00: error: { UNC }
>> [57912.065349] ata25.00: configured for UDMA/133
>> [57912.065458] ata25: EH complete
>> [57914.457377] ata25.00: exception Emask 0x0 SAct 0xfff07fff SErr 0x0 action 0x0
>> [57914.464516] ata25.00: irq_stat 0x40000008
>> [57914.468545] ata25.00: failed command: READ FPDMA QUEUED
>> [57914.473780] ata25.00: cmd 60/08:50:40:74:4c/00:00:1d:02:00/40 tag 10 ncq dma 4096 in
>>                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57914.489711] ata25.00: status: { DRDY SENSE ERR }
>> [57914.494339] ata25.00: error: { UNC }
>> [57914.614467] ata25.00: configured for UDMA/133
>> [57914.614561] ata25: EH complete
>> [57917.090724] ata25.00: exception Emask 0x0 SAct 0x7ffc3fff SErr 0x0 action 0x0
>> [57917.097865] ata25.00: irq_stat 0x40000008
>> [57917.101902] ata25.00: failed command: READ FPDMA QUEUED
>> [57917.107138] ata25.00: cmd 60/08:e0:40:74:4c/00:00:1d:02:00/40 tag 28 ncq dma 4096 in
>>                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57917.123080] ata25.00: status: { DRDY SENSE ERR }
>> [57917.127815] ata25.00: error: { UNC }
>> [57917.246905] ata25.00: configured for UDMA/133
>> [57917.247000] ata25: EH complete
>> [57919.617397] ata25.00: exception Emask 0x0 SAct 0xfe7fc7ff SErr 0x0 action 0x0
>> [57919.624542] ata25.00: irq_stat 0x40000008
>> [57919.628570] ata25.00: failed command: READ FPDMA QUEUED
>> [57919.633802] ata25.00: cmd 60/08:40:40:74:4c/00:00:1d:02:00/40 tag 8 ncq dma 4096 in
>>                        res 43/40:08:40:74:4c/00:00:1d:02:00/00 Emask 0x408 (media error) <F>
>> [57919.649663] ata25.00: status: { DRDY SENSE ERR }
>> [57919.654296] ata25.00: error: { UNC }
>> [57919.769685] ata25.00: configured for UDMA/133
>> [57919.769797] sd 24:0:0:0: [sdc] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
>> [57919.769800] sd 24:0:0:0: [sdc] tag#8 Sense Key : Medium Error [current] 
>> [57919.769803] sd 24:0:0:0: [sdc] tag#8 Add. Sense: Data synchronization mark error
>> [57919.769805] sd 24:0:0:0: [sdc] tag#8 CDB: Read(16) 88 00 00 00 00 02 1d 4c 74 40 00 00 00 08 00 00
>> [57919.769807] I/O error, dev sdc, sector 9081484352 op 0x0:(READ) flags 0x4000 phys_seg 1 prio class 0
>> [57919.778997] ata25: EH complete
>> [57919.882467] md/raid:md127: read error corrected (8 sectors at 9081449536 on dm-1)
>> [64266.546763] INFO: task btrfs-transacti:8202 blocked for more than 122 seconds.
>> [64266.553999]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.560013] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.567852] task:btrfs-transacti state:D stack:0     pid:8202  tgid:8202  ppid:2      flags:0x00004000
>> [64266.567855] Call Trace:
>> [64266.567856]  <TASK>
>> [64266.567858]  __schedule+0x3d5/0x1520
>> [64266.567865]  schedule+0x27/0xf0
>> [64266.567867]  wait_for_commit+0x11f/0x1d0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.567898]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.567902]  btrfs_commit_transaction+0xbb6/0xc80 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.567924]  transaction_kthread+0x159/0x1c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.567943]  ? __pfx_transaction_kthread+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.567958]  kthread+0xcf/0x100
>> [64266.567960]  ? __pfx_kthread+0x10/0x10
>> [64266.567962]  ret_from_fork+0x31/0x50
>> [64266.567964]  ? __pfx_kthread+0x10/0x10
>> [64266.567966]  ret_from_fork_asm+0x1a/0x30
>> [64266.567969]  </TASK>
>> [64266.567998] INFO: task kworker/u64:57:96535 blocked for more than 122 seconds.
>> [64266.575240]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.581256] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.589077] task:kworker/u64:57  state:D stack:0     pid:96535 tgid:96535 ppid:2      flags:0x00004000
>> [64266.589080] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.589084] Call Trace:
>> [64266.589085]  <TASK>
>> [64266.589086]  __schedule+0x3d5/0x1520
>> [64266.589091]  schedule+0x27/0xf0
>> [64266.589093]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.589100]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.589102]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.589107]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.589109]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589133]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.589139]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.589146]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.589147]  __submit_bio+0x168/0x240
>> [64266.589151]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.589153]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589176]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.589179]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589198]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589218]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589235]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.589237]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589254]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589271]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589293]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589310]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.589325]  do_writepages+0x7e/0x270
>> [64266.589329]  __writeback_single_inode+0x41/0x340
>> [64266.589331]  ? wbc_detach_inode+0x116/0x240
>> [64266.589333]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.589341]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.589343]  wb_writeback+0x193/0x310
>> [64266.589347]  wb_workfn+0x2a5/0x440
>> [64266.589350]  process_one_work+0x17b/0x330
>> [64266.589352]  worker_thread+0x2e2/0x410
>> [64266.589354]  ? __pfx_worker_thread+0x10/0x10
>> [64266.589355]  kthread+0xcf/0x100
>> [64266.589357]  ? __pfx_kthread+0x10/0x10
>> [64266.589359]  ret_from_fork+0x31/0x50
>> [64266.589361]  ? __pfx_kthread+0x10/0x10
>> [64266.589362]  ret_from_fork_asm+0x1a/0x30
>> [64266.589366]  </TASK>
>> [64266.589367] INFO: task kworker/u64:80:96615 blocked for more than 122 seconds.
>> [64266.596592]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.602597] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.610419] task:kworker/u64:80  state:D stack:0     pid:96615 tgid:96615 ppid:2      flags:0x00004000
>> [64266.610422] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.610424] Call Trace:
>> [64266.610425]  <TASK>
>> [64266.610426]  __schedule+0x3d5/0x1520
>> [64266.610430]  schedule+0x27/0xf0
>> [64266.610432]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.610436]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.610439]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.610443]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.610444]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610461]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.610465]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.610469]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.610471]  __submit_bio+0x168/0x240
>> [64266.610473]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.610476]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610495]  ? __extent_writepage_io+0x21f/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610511]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610529]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610545]  extent_write_cache_pages+0x397/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610565]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610581]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.610597]  do_writepages+0x7e/0x270
>> [64266.610600]  __writeback_single_inode+0x41/0x340
>> [64266.610601]  ? wbc_detach_inode+0x116/0x240
>> [64266.610604]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.610611]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.610614]  wb_writeback+0x193/0x310
>> [64266.610617]  wb_workfn+0xc4/0x440
>> [64266.610620]  process_one_work+0x17b/0x330
>> [64266.610622]  worker_thread+0x2e2/0x410
>> [64266.610624]  ? __pfx_worker_thread+0x10/0x10
>> [64266.610625]  kthread+0xcf/0x100
>> [64266.610627]  ? __pfx_kthread+0x10/0x10
>> [64266.610629]  ret_from_fork+0x31/0x50
>> [64266.610630]  ? __pfx_kthread+0x10/0x10
>> [64266.610632]  ret_from_fork_asm+0x1a/0x30
>> [64266.610635]  </TASK>
>> [64266.610636] INFO: task kworker/u64:102:96624 blocked for more than 122 seconds.
>> [64266.617953]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.623962] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.631784] task:kworker/u64:102 state:D stack:0     pid:96624 tgid:96624 ppid:2      flags:0x00004000
>> [64266.631786] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.631789] Call Trace:
>> [64266.631790]  <TASK>
>> [64266.631791]  __schedule+0x3d5/0x1520
>> [64266.631795]  schedule+0x27/0xf0
>> [64266.631797]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.631800]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.631803]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.631807]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.631808]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631824]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.631828]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.631832]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.631834]  __submit_bio+0x168/0x240
>> [64266.631836]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.631839]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631857]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.631860]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631877]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631893]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631908]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.631910]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631926]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631942]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631962]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631978]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.631993]  do_writepages+0x7e/0x270
>> [64266.631996]  __writeback_single_inode+0x41/0x340
>> [64266.631998]  ? wbc_detach_inode+0x116/0x240
>> [64266.632000]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.632007]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.632010]  wb_writeback+0x193/0x310
>> [64266.632013]  wb_workfn+0x2a5/0x440
>> [64266.632016]  process_one_work+0x17b/0x330
>> [64266.632018]  worker_thread+0x2e2/0x410
>> [64266.632020]  ? __pfx_worker_thread+0x10/0x10
>> [64266.632021]  kthread+0xcf/0x100
>> [64266.632023]  ? __pfx_kthread+0x10/0x10
>> [64266.632025]  ret_from_fork+0x31/0x50
>> [64266.632026]  ? __pfx_kthread+0x10/0x10
>> [64266.632028]  ret_from_fork_asm+0x1a/0x30
>> [64266.632031]  </TASK>
>> [64266.632032] INFO: task kworker/u64:25:100536 blocked for more than 122 seconds.
>> [64266.639341]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.645348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.653171] task:kworker/u64:25  state:D stack:0     pid:100536 tgid:100536 ppid:2      flags:0x00004000
>> [64266.653173] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.653175] Call Trace:
>> [64266.653176]  <TASK>
>> [64266.653177]  __schedule+0x3d5/0x1520
>> [64266.653181]  schedule+0x27/0xf0
>> [64266.653183]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.653187]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.653189]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.653193]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.653194]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653210]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.653214]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.653218]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.653220]  __submit_bio+0x168/0x240
>> [64266.653222]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.653225]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653244]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.653246]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653264]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653279]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653295]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.653297]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653312]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653328]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653348]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653364]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.653381]  do_writepages+0x7e/0x270
>> [64266.653383]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.653385]  ? select_task_rq_fair+0x7f8/0x1da0
>> [64266.653388]  __writeback_single_inode+0x41/0x340
>> [64266.653390]  ? wbc_detach_inode+0x116/0x240
>> [64266.653392]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.653399]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.653402]  wb_writeback+0x193/0x310
>> [64266.653405]  wb_workfn+0xc4/0x440
>> [64266.653408]  process_one_work+0x17b/0x330
>> [64266.653410]  worker_thread+0x2e2/0x410
>> [64266.653412]  ? __pfx_worker_thread+0x10/0x10
>> [64266.653413]  kthread+0xcf/0x100
>> [64266.653415]  ? __pfx_kthread+0x10/0x10
>> [64266.653417]  ret_from_fork+0x31/0x50
>> [64266.653418]  ? __pfx_kthread+0x10/0x10
>> [64266.653420]  ret_from_fork_asm+0x1a/0x30
>> [64266.653423]  </TASK>
>> [64266.653424] INFO: task kworker/u64:52:101215 blocked for more than 122 seconds.
>> [64266.660726]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.666738] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.674556] task:kworker/u64:52  state:D stack:0     pid:101215 tgid:101215 ppid:2      flags:0x00004000
>> [64266.674558] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.674560] Call Trace:
>> [64266.674561]  <TASK>
>> [64266.674562]  __schedule+0x3d5/0x1520
>> [64266.674566]  schedule+0x27/0xf0
>> [64266.674568]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.674572]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.674575]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.674579]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.674580]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674596]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.674599]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.674604]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.674605]  __submit_bio+0x168/0x240
>> [64266.674608]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.674610]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674629]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.674631]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674649]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674665]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674680]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.674682]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674697]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674713]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674733]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674748]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.674764]  do_writepages+0x7e/0x270
>> [64266.674767]  __writeback_single_inode+0x41/0x340
>> [64266.674769]  ? wbc_detach_inode+0x116/0x240
>> [64266.674771]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.674778]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.674780]  wb_writeback+0x193/0x310
>> [64266.674784]  wb_workfn+0x2a5/0x440
>> [64266.674787]  process_one_work+0x17b/0x330
>> [64266.674789]  worker_thread+0x2e2/0x410
>> [64266.674791]  ? __pfx_worker_thread+0x10/0x10
>> [64266.674792]  kthread+0xcf/0x100
>> [64266.674794]  ? __pfx_kthread+0x10/0x10
>> [64266.674795]  ret_from_fork+0x31/0x50
>> [64266.674797]  ? __pfx_kthread+0x10/0x10
>> [64266.674799]  ret_from_fork_asm+0x1a/0x30
>> [64266.674802]  </TASK>
>> [64266.674803] INFO: task kworker/u64:71:101293 blocked for more than 123 seconds.
>> [64266.682113]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.688120] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.695943] task:kworker/u64:71  state:D stack:0     pid:101293 tgid:101293 ppid:2      flags:0x00004000
>> [64266.695945] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.695948] Call Trace:
>> [64266.695949]  <TASK>
>> [64266.695950]  __schedule+0x3d5/0x1520
>> [64266.695954]  schedule+0x27/0xf0
>> [64266.695956]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.695960]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.695962]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.695966]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.695968]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.695969]  ? blk_cgroup_bio_start+0x8c/0xd0
>> [64266.695973]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.695978]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.695979]  __submit_bio+0x168/0x240
>> [64266.695982]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.695985]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696003]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.696005]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696023]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696038]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696054]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.696056]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696071]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696087]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696107]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696122]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.696138]  do_writepages+0x7e/0x270
>> [64266.696141]  __writeback_single_inode+0x41/0x340
>> [64266.696143]  ? wbc_detach_inode+0x116/0x240
>> [64266.696145]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.696152]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.696155]  wb_writeback+0x193/0x310
>> [64266.696158]  wb_workfn+0x34b/0x440
>> [64266.696161]  process_one_work+0x17b/0x330
>> [64266.696163]  worker_thread+0x2e2/0x410
>> [64266.696165]  ? __pfx_worker_thread+0x10/0x10
>> [64266.696166]  kthread+0xcf/0x100
>> [64266.696168]  ? __pfx_kthread+0x10/0x10
>> [64266.696169]  ret_from_fork+0x31/0x50
>> [64266.696171]  ? __pfx_kthread+0x10/0x10
>> [64266.696173]  ret_from_fork_asm+0x1a/0x30
>> [64266.696176]  </TASK>
>> [64266.696177] INFO: task kworker/u64:83:101621 blocked for more than 123 seconds.
>> [64266.703487]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.709494] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.717308] task:kworker/u64:83  state:D stack:0     pid:101621 tgid:101621 ppid:2      flags:0x00004000
>> [64266.717310] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.717313] Call Trace:
>> [64266.717313]  <TASK>
>> [64266.717315]  __schedule+0x3d5/0x1520
>> [64266.717319]  schedule+0x27/0xf0
>> [64266.717320]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.717324]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.717327]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.717331]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.717332]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717348]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.717351]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.717356]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.717357]  __submit_bio+0x168/0x240
>> [64266.717360]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.717363]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717381]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.717383]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717401]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717416]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717432]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.717433]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717449]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717465]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717485]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717501]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.717516]  do_writepages+0x7e/0x270
>> [64266.717518]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.717519]  ? autoremove_wake_function+0x15/0x60
>> [64266.717522]  __writeback_single_inode+0x41/0x340
>> [64266.717523]  ? wbc_detach_inode+0x116/0x240
>> [64266.717526]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.717533]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.717535]  wb_writeback+0x193/0x310
>> [64266.717539]  wb_workfn+0x2a5/0x440
>> [64266.717542]  process_one_work+0x17b/0x330
>> [64266.717544]  worker_thread+0x2e2/0x410
>> [64266.717546]  ? __pfx_worker_thread+0x10/0x10
>> [64266.717547]  kthread+0xcf/0x100
>> [64266.717548]  ? __pfx_kthread+0x10/0x10
>> [64266.717550]  ret_from_fork+0x31/0x50
>> [64266.717552]  ? __pfx_kthread+0x10/0x10
>> [64266.717554]  ret_from_fork_asm+0x1a/0x30
>> [64266.717557]  </TASK>
>> [64266.717558] INFO: task kworker/u64:88:101623 blocked for more than 123 seconds.
>> [64266.724865]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.730873] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.738691] task:kworker/u64:88  state:D stack:0     pid:101623 tgid:101623 ppid:2      flags:0x00004000
>> [64266.738693] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.738695] Call Trace:
>> [64266.738696]  <TASK>
>> [64266.738697]  __schedule+0x3d5/0x1520
>> [64266.738701]  schedule+0x27/0xf0
>> [64266.738703]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.738707]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.738709]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.738714]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.738715]  ? btrfs_add_ordered_sum+0x26/0x70 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738731]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.738734]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.738739]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.738740]  __submit_bio+0x168/0x240
>> [64266.738743]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.738745]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738764]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.738766]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738788]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738804]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738819]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.738821]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738836]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738852]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738872]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738887]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.738902]  do_writepages+0x7e/0x270
>> [64266.738905]  __writeback_single_inode+0x41/0x340
>> [64266.738907]  ? wbc_detach_inode+0x116/0x240
>> [64266.738909]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.738917]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.738919]  wb_writeback+0x193/0x310
>> [64266.738922]  wb_workfn+0x2a5/0x440
>> [64266.738926]  process_one_work+0x17b/0x330
>> [64266.738928]  worker_thread+0x2e2/0x410
>> [64266.738929]  ? __pfx_worker_thread+0x10/0x10
>> [64266.738931]  kthread+0xcf/0x100
>> [64266.738932]  ? __pfx_kthread+0x10/0x10
>> [64266.738934]  ret_from_fork+0x31/0x50
>> [64266.738936]  ? __pfx_kthread+0x10/0x10
>> [64266.738937]  ret_from_fork_asm+0x1a/0x30
>> [64266.738941]  </TASK>
>> [64266.738942] INFO: task kworker/u64:108:102161 blocked for more than 123 seconds.
>> [64266.746331]       Tainted: P           OE      6.10.5-arch1-1 #1
>> [64266.752336] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [64266.760152] task:kworker/u64:108 state:D stack:0     pid:102161 tgid:102161 ppid:2      flags:0x00004000
>> [64266.760154] Workqueue: writeback wb_workfn (flush-btrfs-1)
>> [64266.760156] Call Trace:
>> [64266.760157]  <TASK>
>> [64266.760158]  __schedule+0x3d5/0x1520
>> [64266.760162]  schedule+0x27/0xf0
>> [64266.760164]  raid5_get_active_stripe+0x279/0x560 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.760168]  ? __pfx_autoremove_wake_function+0x10/0x10
>> [64266.760170]  raid5_make_request+0x20f/0x12a0 [raid456 b94de4f08587c81d0c642257de3cb756cdaec135]
>> [64266.760175]  ? __pfx_woken_wake_function+0x10/0x10
>> [64266.760176]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.760177]  ? blk_cgroup_bio_start+0x8c/0xd0
>> [64266.760181]  md_handle_request+0x154/0x270 [md_mod 5b42cb1736bc5b827320e91152fa39660da047e7]
>> [64266.760185]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.760187]  __submit_bio+0x168/0x240
>> [64266.760189]  submit_bio_noacct_nocheck+0x197/0x3e0
>> [64266.760192]  btrfs_submit_chunk+0x1a9/0x6c0 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760210]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.760213]  btrfs_submit_bio+0x1a/0x30 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760230]  submit_one_bio+0x36/0x50 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760246]  submit_extent_page+0x104/0x290 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760261]  ? folio_clear_dirty_for_io+0x121/0x190
>> [64266.760263]  __extent_writepage_io+0x1e6/0x470 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760278]  ? writepage_delalloc+0x83/0x150 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760294]  extent_write_cache_pages+0x281/0x850 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760314]  btrfs_writepages+0x89/0x130 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760330]  ? __pfx_end_bbio_data_write+0x10/0x10 [btrfs 6037d5c3ca04912456b14a5819efacac8aa8a062]
>> [64266.760345]  do_writepages+0x7e/0x270
>> [64266.760346]  ? srso_alias_return_thunk+0x5/0xfbef5
>> [64266.760349]  __writeback_single_inode+0x41/0x340
>> [64266.760351]  ? wbc_detach_inode+0x116/0x240
>> [64266.760353]  writeback_sb_inodes+0x21c/0x4f0
>> [64266.760360]  __writeback_inodes_wb+0x4c/0xf0
>> [64266.760363]  wb_writeback+0x193/0x310
>> [64266.760366]  wb_workfn+0x2a5/0x440
>> [64266.760369]  process_one_work+0x17b/0x330
>> [64266.760371]  worker_thread+0x2e2/0x410
>> [64266.760373]  ? __pfx_worker_thread+0x10/0x10
>> [64266.760374]  kthread+0xcf/0x100
>> [64266.760376]  ? __pfx_kthread+0x10/0x10
>> [64266.760378]  ret_from_fork+0x31/0x50
>> [64266.760380]  ? __pfx_kthread+0x10/0x10
>> [64266.760381]  ret_from_fork_asm+0x1a/0x30
>> [64266.760385]  </TASK>
>> [64266.760385] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
>> [85040.962654] systemd-coredump[137770]: Process 1259 (systemd-journal) of user 0 terminated abnormally with signal 6/ABRT, processing...
>> sda           8:0    0  14.6T  0 disk  
>> └─sda1        8:1    0  14.5T  0 part  
>>  └─c07     254:3    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdb           8:16   0  14.6T  0 disk  
>> └─sdb1        8:17   0  14.5T  0 part  
>>  └─c09     254:4    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdc           8:32   0  14.6T  0 disk  
>> └─sdc1        8:33   0  14.5T  0 part  
>>  └─c05     254:1    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdd           8:48   0  14.6T  0 disk  
>> └─sdd1        8:49   0  14.5T  0 part  
>>  └─c10     254:9    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sde           8:64   0  14.6T  0 disk  
>> └─sde1        8:65   0  14.5T  0 part  
>>  └─c06     254:10   0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdf           8:80   0  14.6T  0 disk  
>> └─sdf1        8:81   0  14.5T  0 part  
>>  └─c08     254:8    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdg           8:96   0  14.6T  0 disk  
>> └─sdg1        8:97   0  14.5T  0 part  
>>  └─c03     254:7    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdh           8:112  0  14.6T  0 disk  
>> └─sdh1        8:113  0  14.5T  0 part  
>>  └─c04     254:6    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdi           8:128  0  14.6T  0 disk  
>> └─sdi1        8:129  0  14.5T  0 part  
>>  └─c01     254:2    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> sdj           8:144  0  14.6T  0 disk  
>> └─sdj1        8:145  0  14.5T  0 part  
>>  └─c02     254:5    0  14.5T  0 crypt 
>>    └─md127   9:127  0 116.4T  0 raid6 /var/myhdd
>> [root@coldnas ~]# cat /proc/mdstat 
>> Personalities : [raid6] [raid5] [raid4] 
>> md127 : active raid6 dm-10[5] dm-9[9] dm-8[7] dm-7[2] dm-6[3] dm-5[1] dm-4[8] dm-3[6] dm-2[0] dm-1[4]
>>      124972269568 blocks super 1.2 level 6, 4096k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
>>      bitmap: 3/117 pages [12KB], 65536KB chunk
>> 
>> unused devices: <none>
>> ps aux | grep "        D"
>> root        8202  0.2  0.0      0     0 ?        D    Aug18   4:11 [btrfs-transaction]
>> hasi        9117  0.4  0.0   4544  2628 ?        Ds   Aug18   6:33 /usr/lib/ssh/sftp-server
>> hasi        9143  0.4  0.0   4572  2756 ?        Ds   Aug18   6:37 /usr/lib/ssh/sftp-server
>> hasi        9180  1.1  0.0   4452  2796 ?        Ds   Aug18  16:56 /usr/lib/ssh/sftp-server
>> hasi       30949  0.4  0.0   4488  2556 ?        Ds   Aug18   4:55 /usr/lib/ssh/sftp-server
>> hasi       31971  0.3  0.0   4512  2628 ?        Ds   Aug18   4:30 /usr/lib/ssh/sftp-server
>> hasi       32277  0.3  0.0   4592  2580 ?        Ds   Aug18   4:30 /usr/lib/ssh/sftp-server
>> hasi       32379  0.3  0.0   4540  2692 ?        Ds   Aug18   3:58 /usr/lib/ssh/sftp-server
>> hasi       32694  0.3  0.0   4536  2976 ?        Ds   Aug18   3:54 /usr/lib/ssh/sftp-server
>> hasi       34038  0.2  0.0   4536  2736 ?        Ds   Aug18   2:59 /usr/lib/ssh/sftp-server
>> hasi       35651  0.2  0.0   4568  2556 ?        Ds   01:18   1:56 /usr/lib/ssh/sftp-server
>> hasi       35767  0.2  0.0   4604  2624 ?        Ds   01:21   2:03 /usr/lib/ssh/sftp-server
>> hasi       35771  0.2  0.0   4464  2556 ?        Ds   01:21   2:07 /usr/lib/ssh/sftp-server
>> hasi       35816  0.2  0.0   4512  2864 ?        Ds   01:22   2:07 /usr/lib/ssh/sftp-server
>> hasi       36497  0.2  0.0   4636  2864 ?        Ds   01:37   2:01 /usr/lib/ssh/sftp-server
>> hasi       36504  0.2  0.0   4548  2692 ?        Ds   01:37   2:06 /usr/lib/ssh/sftp-server
>> hasi       36511  0.2  0.0   4540  2864 ?        Ds   01:37   1:48 /usr/lib/ssh/sftp-server
>> hasi       43900  0.1  0.0   4456  2568 ?        Ds   02:47   1:36 /usr/lib/ssh/sftp-server
>> hasi       55877  0.1  0.0   4476  2756 ?        Ds   03:22   1:17 /usr/lib/ssh/sftp-server
>> hasi       55917  0.1  0.0   4516  2848 ?        Ds   03:24   1:16 /usr/lib/ssh/sftp-server
>> hasi       56055  0.1  0.0   4516  2456 ?        Ds   03:27   1:21 /usr/lib/ssh/sftp-server
>> hasi       56075  0.1  0.0   4380  2584 ?        Ds   03:28   1:26 /usr/lib/ssh/sftp-server
>> hasi       56086  0.1  0.0   4512  2712 ?        Ds   03:28   1:28 /usr/lib/ssh/sftp-server
>> hasi       56090  0.1  0.0   4556  2928 ?        Ds   03:28   1:28 /usr/lib/ssh/sftp-server
>> hasi       56231  0.1  0.0   4508  2752 ?        Ds   03:29   1:15 /usr/lib/ssh/sftp-server
>> hasi       57837  0.1  0.0   4408  2500 ?        Ds   03:33   1:15 /usr/lib/ssh/sftp-server
>> hasi       57840  0.1  0.0   4460  2944 ?        Ds   03:33   1:12 /usr/lib/ssh/sftp-server
>> hasi       85960  0.1  0.0   4552  2816 ?        Ds   05:00   1:03 /usr/lib/ssh/sftp-server
>> hasi       85967  0.1  0.0   4508  2648 ?        Ds   05:00   0:41 /usr/lib/ssh/sftp-server
>> hasi       85980  0.1  0.0   4520  2568 ?        Ds   05:00   0:55 /usr/lib/ssh/sftp-server
>> hasi       85985  0.1  0.0   4496  2784 ?        Ds   05:00   0:53 /usr/lib/ssh/sftp-server
>> root       96535  0.8  0.0      0     0 ?        D    05:35   5:46 [kworker/u64:57+flush-btrfs-1]
>> root       96615  0.6  0.0      0     0 ?        D    05:35   4:33 [kworker/u64:80+flush-btrfs-1]
>> root       96624  0.6  0.0      0     0 ?        D    05:35   4:31 [kworker/u64:102+flush-btrfs-1]
>> root      100536  0.6  0.0      0     0 ?        D    05:53   4:25 [kworker/u64:25+flush-btrfs-1]
>> root      101215  0.6  0.0      0     0 ?        D    05:57   3:56 [kworker/u64:52+flush-btrfs-1]
>> root      101293  0.7  0.0      0     0 ?        D    05:57   4:53 [kworker/u64:71+flush-btrfs-1]
>> root      101621  0.6  0.0      0     0 ?        D    05:59   3:56 [kworker/u64:83+flush-btrfs-1]
>> root      101623  0.6  0.0      0     0 ?        D    05:59   3:57 [kworker/u64:88+flush-btrfs-1]
>> root      102161  0.6  0.0      0     0 ?        D    06:01   3:55 [kworker/u64:108+flush-btrfs-1]
>> root      103370  0.5  0.0      0     0 ?        D    06:08   3:43 [kworker/u64:144+flush-btrfs-1]
>> root      108317  0.6  0.0      0     0 ?        D    06:30   3:48 [kworker/u64:24+flush-btrfs-1]
>> root      109537  0.6  0.0      0     0 ?        D    06:37   3:40 [kworker/u64:4+flush-btrfs-1]
>> root      109550  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:26+flush-btrfs-1]
>> root      109553  0.6  0.0      0     0 ?        D    06:37   3:41 [kworker/u64:32+flush-btrfs-1]
>> root      109555  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:34+flush-btrfs-1]
>> root      109559  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:42+flush-btrfs-1]
>> root      109564  0.6  0.0      0     0 ?        D    06:37   3:42 [kworker/u64:48+flush-btrfs-1]
>> root      115503  0.2  0.0      0     0 ?        D    07:37   1:26 [kworker/u64:6+flush-btrfs-1]
>> root      118242  0.1  0.0      0     0 ?        D    08:08   0:47 [kworker/u64:68+flush-btrfs-1]
>> root      119894  0.1  0.0      0     0 ?        D    08:22   0:36 [kworker/u64:14+flush-btrfs-1]
>> root      120633  0.1  0.0      0     0 ?        D    08:28   0:43 [kworker/u64:5+flush-btrfs-1]
>> root      120638  0.0  0.0      0     0 ?        D    08:28   0:20 [kworker/u64:23+flush-btrfs-1]
>> root      120642  0.0  0.0      0     0 ?        D    08:28   0:28 [kworker/u64:35+flush-btrfs-1]
>> root      122280  0.0  0.0      0     0 ?        D    08:36   0:20 [kworker/u64:0+flush-btrfs-1]
>> root      122669  0.0  0.0      0     0 ?        D    08:39   0:11 [kworker/u64:79+flush-btrfs-1]
>> root      123019  0.0  0.0      0     0 ?        D    08:42   0:27 [kworker/u64:85+blkcg_punt_bio]
>> root      123028  0.0  0.0      0     0 ?        D    08:42   0:21 [kworker/u64:95+flush-btrfs-1]
>> root      124094  0.0  0.0      0     0 ?        D    08:51   0:08 [kworker/u64:29+flush-btrfs-1]
>> root      124101  0.0  0.0      0     0 ?        D    08:51   0:20 [kworker/u64:91+flush-btrfs-1]
>> root      124177  0.0  0.0      0     0 ?        D    08:51   0:19 [kworker/u64:106+flush-btrfs-1]
>> root      124418  0.0  0.0      0     0 ?        D    08:53   0:20 [kworker/u64:114+flush-btrfs-1]
>> root      124423  0.0  0.0      0     0 ?        D    08:53   0:15 [kworker/u64:119+flush-btrfs-1]
>> root      124428  0.0  0.0      0     0 ?        D    08:53   0:13 [kworker/u64:124+flush-btrfs-1]
>> root      124628  0.0  0.0      0     0 ?        D    08:55   0:22 [kworker/u64:132+flush-btrfs-1]
>> root      124629  0.0  0.0      0     0 ?        D    08:55   0:20 [kworker/u64:133+flush-btrfs-1]
>> root      124630  0.0  0.0      0     0 ?        D    08:55   0:05 [kworker/u64:134+flush-btrfs-1]
>> root      124633  0.0  0.0      0     0 ?        D    08:55   0:05 [kworker/u64:137+flush-btrfs-1]
>> root      124636  0.0  0.0      0     0 ?        D    08:55   0:07 [kworker/u64:140+blkcg_punt_bio]
>> root      124637  0.0  0.0      0     0 ?        D    08:55   0:23 [kworker/u64:141+flush-btrfs-1]
>> work      125055  0.0  0.0 156020 22868 ?        Dl   08:59   0:15 smbd: client [192.168.67.33]
>> root      126064  0.0  0.0      0     0 ?        D    09:07   0:18 [kworker/u64:28+flush-btrfs-1]
>> hasi      127483  0.0  0.0   2604  1776 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      127494  0.0  0.0   2604  1704 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      127501  0.0  0.0   2604  1688 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      127508  0.0  0.0   2604  1772 ?        Ds   09:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      127520  0.0  0.0   2604  1804 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      127527  0.0  0.0   2604  1864 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      127543  0.0  0.0   2604  1732 ?        Ds   09:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      127551  0.0  0.0   2604  1700 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      127558  0.0  0.0   2604  1916 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      127565  0.0  0.0   2604  1916 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      127572  0.0  0.0   2604  1700 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      127579  0.0  0.0   2604  1864 ?        Ds   09:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      127600  0.0  0.0   2604  1688 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      127608  0.0  0.0   2604  1700 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      127617  0.0  0.0   2604  1732 ?        Ds   09:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      127624  0.0  0.0   2604  1728 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      127631  0.0  0.0   2604  1908 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      127646  0.0  0.0   2604  1596 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      127654  0.0  0.0   2604  2044 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      127664  0.0  0.0   2604  1732 ?        Ds   09:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      127676  0.0  0.0   2604  1752 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      127683  0.0  0.0   2604  1728 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      127701  0.0  0.0   2604  1776 ?        Ds   09:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      127709  0.0  0.0   2604  1688 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      127716  0.0  0.0   2604  1872 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      127723  0.0  0.0   2604  1700 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      127730  0.0  0.0   2736  1960 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      127741  0.0  0.0   2604  1844 ?        Ds   09:39   0:00 /usr/lib/ssh/sftp-server
>> root      127746  0.0  0.0      0     0 ?        D    09:39   0:00 [kworker/u64:1+flush-btrfs-1]
>> hasi      127760  0.0  0.0   2604  1704 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      127768  0.0  0.0   2604  1872 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      127777  0.0  0.0   2604  1944 ?        Ds   09:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      127784  0.0  0.0   2604  1732 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      127791  0.0  0.0   2604  1712 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      127803  0.0  0.0   2604  1728 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      127811  0.0  0.0   2604  1596 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      127818  0.0  0.0   2604  1728 ?        Ds   09:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      127830  0.0  0.0   2604  1776 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      127837  0.0  0.0   2604  1772 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      127848  0.0  0.0   2604  1728 ?        Ds   09:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      127856  0.0  0.0   2604  1916 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      127863  0.0  0.0   2604  1916 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      127870  0.0  0.0   2604  1904 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      127878  0.0  0.0   2604  1700 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      127885  0.0  0.0   2604  1716 ?        Ds   09:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      127899  0.0  0.0   2604  1864 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
>> root      127900  0.0  0.0      0     0 ?        D    09:46   0:00 [kworker/u64:2+flush-btrfs-1]
>> hasi      127908  0.0  0.0   2604  1716 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      127917  0.0  0.0   2604  2072 ?        Ds   09:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      127929  0.0  0.0   2604  1732 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      127936  0.0  0.0   2604  2000 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      127948  0.0  0.0   2604  1776 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      127956  0.0  0.0   2604  1688 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
>> root      127957  0.0  0.0      0     0 ?        D    09:47   0:00 [kworker/u64:3+flush-btrfs-1]
>> root      127958  0.0  0.0      0     0 ?        D    09:47   0:00 [kworker/u64:7+flush-btrfs-1]
>> hasi      127965  0.0  0.0   2604  1824 ?        Ds   09:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      127980  0.0  0.0   2604  1704 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      127987  0.0  0.0   2604  2016 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      128005  0.0  0.0   2604  1704 ?        Ds   09:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      128013  0.0  0.0   2604  1752 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      128020  0.0  0.0   2604  1700 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      128027  0.0  0.0   2604  1732 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      128034  0.0  0.0   2604  1856 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      128043  0.0  0.0   2604  1776 ?        Ds   09:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      128057  0.0  0.0   2604  1992 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
>> root      128062  0.0  0.0      0     0 ?        D    09:51   0:00 [kworker/u64:8+flush-btrfs-1]
>> root      128063  0.0  0.0      0     0 ?        D    09:51   0:00 [kworker/u64:9+flush-btrfs-1]
>> hasi      128071  0.0  0.0   2604  1916 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      128080  0.0  0.0   2604  1716 ?        Ds   09:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      128087  0.0  0.0   2604  1864 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      128094  0.0  0.0   2604  1596 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      128104  0.0  0.0   2604  1776 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      128112  0.0  0.0   2604  1772 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      128119  0.0  0.0   2604  1688 ?        Ds   09:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      128131  0.0  0.0   2604  1864 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      128138  0.0  0.0   2604  1944 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      128149  0.0  0.0   2604  1864 ?        Ds   09:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      128157  0.0  0.0   2604  1804 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      128164  0.0  0.0   2604  1944 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      128171  0.0  0.0   2604  1880 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      128180  0.0  0.0   2604  1716 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      128187  0.0  0.0   2604  1596 ?        Ds   09:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      128201  0.0  0.0   2604  1596 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      128209  0.0  0.0   2740  1904 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      128216  0.0  0.0   4408  3012 ?        Ds   09:56   0:00 /usr/lib/ssh/sftp-server
>> root      128217  0.0  0.0      0     0 ?        D    09:56   0:00 [kworker/u64:10+flush-btrfs-1]
>> hasi      128228  0.0  0.0   2604  1728 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      128235  0.0  0.0   2604  2044 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      128246  0.0  0.0   2604  1844 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
>> root      128250  0.0  0.0      0     0 ?        D    09:57   0:00 [kworker/u64:11+flush-btrfs-1]
>> hasi      128260  0.0  0.0   4408  2696 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      128263  0.0  0.0   2604  1688 ?        Ds   09:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      128277  0.0  0.0   2604  1908 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
>> root      128282  0.0  0.0      0     0 ?        D    09:58   0:00 [kworker/u64:12+flush-btrfs-1]
>> hasi      128289  0.0  0.0   2604  1752 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      128303  0.0  0.0   2604  1716 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      128306  0.0  0.0   4408  2752 ?        Ds   09:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      128315  0.0  0.0   2604  1688 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      128322  0.0  0.0   2604  1864 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      128329  0.0  0.0   2604  2072 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      128338  0.0  0.0   2604  1728 ?        Ds   09:59   0:00 /usr/lib/ssh/sftp-server
>> root      128345  0.0  0.0      0     0 ?        D    10:00   0:00 [kworker/u64:13+flush-btrfs-1]
>> hasi      128361  0.0  0.0   2604  2044 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
>> root      128364  0.0  0.0      0     0 ?        D    10:01   0:00 [kworker/u64:15+flush-btrfs-1]
>> hasi      128372  0.0  0.0   2604  1772 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      128381  0.0  0.0   2604  1732 ?        Ds   10:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      128390  0.0  0.0   4408  2752 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      128397  0.0  0.0   2604  1700 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      128400  0.0  0.0   2604  1688 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      128414  0.0  0.0   2604  1596 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      128421  0.0  0.0   2604  1728 ?        Ds   10:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      128433  0.0  0.0   2604  1872 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      128440  0.0  0.0   2604  1772 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      128459  0.0  0.0   2604  1700 ?        Ds   10:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      128466  0.0  0.0   2604  2044 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      128475  0.0  0.0   4456  2944 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      128485  0.0  0.0   2604  1688 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      128493  0.0  0.0   2604  1944 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      128500  0.0  0.0   2604  1804 ?        Ds   10:04   0:00 /usr/lib/ssh/sftp-server
>> root      128504  0.0  0.0      0     0 ?        D    10:05   0:00 [kworker/u64:16+flush-btrfs-1]
>> hasi      128515  0.0  0.0   2604  1688 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
>> root      128516  0.0  0.0      0     0 ?        D    10:06   0:00 [kworker/u64:17+flush-btrfs-1]
>> hasi      128524  0.0  0.0   2604  1916 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      128533  0.0  0.0   2604  1752 ?        Ds   10:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      128540  0.0  0.0   2604  1732 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      128547  0.0  0.0   2604  2072 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      128560  0.0  0.0   2604  1688 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      128567  0.0  0.0   2604  1716 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
>> root      128572  0.0  0.0      0     0 ?        D    10:07   0:00 [kworker/u64:18+flush-btrfs-1]
>> hasi      128576  0.0  0.0   2604  1716 ?        Ds   10:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      128588  0.0  0.0   2604  2000 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      128599  0.0  0.0   2604  1900 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
>> root      128601  0.0  0.0      0     0 ?        D    10:08   0:00 [kworker/u64:19+flush-btrfs-1]
>> hasi      128612  0.0  0.0   2604  1944 ?        Ds   10:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      128619  0.0  0.0   2604  1728 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      128627  0.0  0.0   2604  1596 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      128634  0.0  0.0   2604  1700 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      128641  0.0  0.0   2604  1908 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      128650  0.0  0.0   2604  2016 ?        Ds   10:09   0:00 /usr/lib/ssh/sftp-server
>> root      128652  0.0  0.0      0     0 ?        D    10:09   0:00 [kworker/u64:20+flush-btrfs-1]
>> root      128656  0.0  0.0      0     0 ?        D    10:10   0:00 [kworker/u64:21+flush-btrfs-1]
>> hasi      128667  0.0  0.0   2604  1904 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
>> root      128670  0.0  0.0      0     0 ?        D    10:11   0:00 [kworker/u64:22+flush-btrfs-1]
>> hasi      128678  0.0  0.0   2604  1688 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      128687  0.0  0.0   2604  1732 ?        Ds   10:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      128694  0.0  0.0   2604  1752 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      128701  0.0  0.0   2604  1716 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      128711  0.0  0.0   2604  1596 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      128718  0.0  0.0   2604  1716 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
>> root      128724  0.0  0.0      0     0 ?        D    10:12   0:00 [kworker/u64:27+flush-btrfs-1]
>> hasi      128727  0.0  0.0   2604  1864 ?        Ds   10:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      128741  0.0  0.0   2604  1776 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      128748  0.0  0.0   2604  1596 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
>> root      128749  0.0  0.0      0     0 ?        D    10:13   0:00 [kworker/u64:30+flush-btrfs-1]
>> hasi      128760  0.0  0.0   2604  1596 ?        Ds   10:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      128767  0.0  0.0   2604  1908 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      128776  0.0  0.0   2604  1864 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      128786  0.0  0.0   2604  1688 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      128793  0.0  0.0   2604  1872 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      128800  0.0  0.0   2604  1772 ?        Ds   10:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      128817  0.0  0.0   2604  1916 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      128825  0.0  0.0   2604  2072 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      128835  0.0  0.0   2604  2000 ?        Ds   10:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      128843  0.0  0.0   2604  1704 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      128850  0.0  0.0   2604  1716 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      128860  0.0  0.0   2780  1904 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      128869  0.0  0.0   2604  1728 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
>> root      128871  0.0  0.0      0     0 ?        D    10:17   0:00 [kworker/u64:31+flush-btrfs-1]
>> hasi      128878  0.0  0.0   2604  1916 ?        Ds   10:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      128899  0.0  0.0   2604  1880 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      128907  0.0  0.0   2604  1944 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      128919  0.0  0.0   2604  1844 ?        Ds   10:18   0:00 /usr/lib/ssh/sftp-server
>> root      128922  0.0  0.0      0     0 ?        D    10:19   0:00 [kworker/u64:33+flush-btrfs-1]
>> hasi      128929  0.0  0.0   2604  1752 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      128937  0.0  0.0   2604  1596 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      128944  0.0  0.0   2604  1716 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      128951  0.0  0.0   2604  1804 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      128958  0.0  0.0   2604  2072 ?        Ds   10:19   0:00 /usr/lib/ssh/sftp-server
>> root      128965  0.0  0.0      0     0 ?        D    10:20   0:00 [kworker/u64:36+flush-btrfs-1]
>> hasi      128976  0.0  0.0   2604  1716 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
>> root      128977  0.0  0.0      0     0 ?        D    10:21   0:00 [kworker/u64:37+flush-btrfs-1]
>> hasi      128985  0.0  0.0   2604  1772 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      128994  0.0  0.0   2604  1916 ?        Ds   10:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      129001  0.0  0.0   2604  1944 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      129008  0.0  0.0   2604  1916 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      129018  0.0  0.0   2604  1804 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      129025  0.0  0.0   2604  1704 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      129033  0.0  0.0   2604  1728 ?        Ds   10:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      129045  0.0  0.0   2604  1688 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      129052  0.0  0.0   2604  1732 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      129063  0.0  0.0   2604  1872 ?        Ds   10:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      129070  0.0  0.0   2604  1968 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
>> root      129074  0.0  0.0      0     0 ?        D    10:24   0:00 [kworker/u64:38+flush-btrfs-1]
>> hasi      129082  0.0  0.0   2604  1804 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      129089  0.0  0.0   2604  1688 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      129096  0.0  0.0   2604  1716 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      129103  0.0  0.0   2604  1688 ?        Ds   10:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      129117  0.0  0.0   2604  1732 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      129125  0.0  0.0   2604  2000 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      129139  0.0  0.0   2604  1776 ?        Ds   10:26   0:00 /usr/lib/ssh/sftp-server
>> work      129145  0.0  0.0  97816 22024 ?        Dl   10:26   0:00 smbd: client [192.168.67.33]
>> root      129161  0.0  0.0      0     0 ?        D    10:27   0:00 [kworker/u64:39+flush-btrfs-1]
>> hasi      129168  0.0  0.0   2604  1716 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      129175  0.0  0.0   2604  1864 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      129185  0.0  0.0   2604  1864 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      129192  0.0  0.0   2604  1844 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      129203  0.0  0.0   2604  1688 ?        Ds   10:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      129216  0.0  0.0   2604  1688 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      129223  0.0  0.0   2604  1992 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      129236  0.0  0.0   2604  2072 ?        Ds   10:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      129246  0.0  0.0   2604  1596 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      129255  0.0  0.0   2604  1944 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      129262  0.0  0.0   2604  1716 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
>> root      129263  0.0  0.0      0     0 ?        D    10:29   0:00 [kworker/u64:40+flush-btrfs-1]
>> hasi      129270  0.0  0.0   2604  1700 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      129277  0.0  0.0   2604  1772 ?        Ds   10:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      129292  0.0  0.0   2604  1776 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      129300  0.0  0.0   2604  1732 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      129309  0.0  0.0   2604  1700 ?        Ds   10:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      129319  0.0  0.0   2604  1880 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      129330  0.0  0.0   2604  1864 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      129341  0.0  0.0   2604  1688 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      129348  0.0  0.0   2604  1716 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      129355  0.0  0.0   2604  1772 ?        Ds   10:32   0:00 /usr/lib/ssh/sftp-server
>> root      129362  0.0  0.0      0     0 ?        D    10:33   0:00 [kworker/u64:41+flush-btrfs-1]
>> hasi      129369  0.0  0.0   2604  1944 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      129376  0.0  0.0   2604  1776 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      129394  0.0  0.0   2604  1752 ?        Ds   10:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      129401  0.0  0.0   2604  1864 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      129409  0.0  0.0   2604  1944 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      129416  0.0  0.0   2604  1900 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      129425  0.0  0.0   2604  1944 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      129432  0.0  0.0   2604  1728 ?        Ds   10:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      129450  0.0  0.0   2604  1700 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      129458  0.0  0.0   2604  1856 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      129472  0.0  0.0   2604  1752 ?        Ds   10:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      129479  0.0  0.0   2604  1732 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
>> root      129480  0.0  0.0      0     0 ?        D    10:37   0:00 [kworker/u64:43+flush-btrfs-1]
>> hasi      129487  0.0  0.0   2604  1704 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      129497  0.0  0.0   2604  1732 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      129504  0.0  0.0   2604  1856 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      129512  0.0  0.0   2604  1700 ?        Ds   10:37   0:00 /usr/lib/ssh/sftp-server
>> root      129519  0.0  0.0      0     0 ?        D    10:38   0:00 [kworker/u64:44+flush-btrfs-1]
>> hasi      129526  0.0  0.0   2604  2044 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      129535  0.0  0.0   2604  1752 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
>> root      129541  0.0  0.0      0     0 ?        D    10:38   0:00 [kworker/u64:45+flush-btrfs-1]
>> hasi      129554  0.0  0.0   2736  1772 ?        Ds   10:38   0:00 /usr/lib/ssh/sftp-server
>> root      129560  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:46+flush-btrfs-1]
>> hasi      129567  0.0  0.0   2604  1916 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      129575  0.0  0.0   2604  1872 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      129582  0.0  0.0   2604  1688 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      129589  0.0  0.0   2604  1972 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      129597  0.0  0.0   2604  1596 ?        Ds   10:39   0:00 /usr/lib/ssh/sftp-server
>> root      129598  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:47+flush-btrfs-1]
>> root      129599  0.0  0.0      0     0 ?        D    10:39   0:00 [kworker/u64:49+flush-btrfs-1]
>> hasi      129613  0.0  0.0   2604  1916 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      129622  0.0  0.0   2604  1728 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      129631  0.0  0.0   2604  1944 ?        Ds   10:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      129638  0.0  0.0   2604  1960 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      129647  0.0  0.0   2604  1916 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      129657  0.0  0.0   2604  1776 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      129664  0.0  0.0   2604  1728 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      129671  0.0  0.0   2604  1716 ?        Ds   10:42   0:00 /usr/lib/ssh/sftp-server
>> root      129679  0.0  0.0      0     0 ?        D    10:43   0:00 [kworker/u64:50+flush-btrfs-1]
>> hasi      129686  0.0  0.0   2604  1596 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      129693  0.0  0.0   2604  1596 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      129704  0.0  0.0   2604  1688 ?        Ds   10:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      129711  0.0  0.0   2604  1864 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      129719  0.0  0.0   2604  1700 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      129726  0.0  0.0   2604  1944 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      129733  0.0  0.0   2604  1620 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      129740  0.0  0.0   2604  1864 ?        Ds   10:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      129754  0.0  0.0   2604  1776 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      129763  0.0  0.0   2604  2000 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      129776  0.0  0.0   2604  2016 ?        Ds   10:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      129784  0.0  0.0   2604  1696 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      129791  0.0  0.0   2604  2000 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      129803  0.0  0.0   2604  1716 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      129810  0.0  0.0   2604  1700 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
>> root      129811  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:51+flush-btrfs-1]
>> root      129812  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:53+flush-btrfs-1]
>> hasi      129819  0.0  0.0   2604  1688 ?        Ds   10:47   0:00 /usr/lib/ssh/sftp-server
>> root      129823  0.0  0.0      0     0 ?        D    10:47   0:00 [kworker/u64:54+flush-btrfs-1]
>> hasi      129836  0.0  0.0   2604  1716 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      129843  0.0  0.0   2604  1856 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      129856  0.0  0.0   2604  1992 ?        Ds   10:48   0:00 /usr/lib/ssh/sftp-server
>> root      129861  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:55+flush-btrfs-1]
>> root      129862  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:56+flush-btrfs-1]
>> root      129863  0.0  0.0      0     0 ?        D    10:49   0:00 [kworker/u64:58+flush-btrfs-1]
>> hasi      129870  0.0  0.0   2604  1688 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      129878  0.0  0.0   2604  1804 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      129885  0.0  0.0   2604  1872 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      129892  0.0  0.0   2604  1904 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      129902  0.0  0.0   2604  1688 ?        Ds   10:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      129917  0.0  0.0   2604  1596 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      129925  0.0  0.0   2604  1772 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      129934  0.0  0.0   2604  1944 ?        Ds   10:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      129941  0.0  0.0   2604  1776 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      129949  0.0  0.0   2604  1716 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      129958  0.0  0.0   2604  1596 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      129965  0.0  0.0   2604  1728 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      129972  0.0  0.0   2604  1752 ?        Ds   10:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      129985  0.0  0.0   2604  1772 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      129992  0.0  0.0   2604  1864 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      130004  0.0  0.0   2604  1864 ?        Ds   10:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      130011  0.0  0.0   2604  1844 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      130020  0.0  0.0   2604  1732 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      130027  0.0  0.0   2604  1596 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      130034  0.0  0.0   2604  1752 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      130041  0.0  0.0   2604  1772 ?        Ds   10:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      130055  0.0  0.0   2604  1700 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      130063  0.0  0.0   2604  1992 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      130079  0.0  0.0   2604  1916 ?        Ds   10:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      130086  0.0  0.0   2604  1596 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      130097  0.0  0.0   2604  1688 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      130105  0.0  0.0   2604  1732 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      130109  0.0  0.0   2604  1752 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      130114  0.0  0.0   2604  1916 ?        Ds   10:57   0:00 /usr/lib/ssh/sftp-server
>> root      130127  0.0  0.0      0     0 ?        D    10:57   0:00 [kworker/u64:59+flush-btrfs-1]
>> hasi      130138  0.0  0.0   2604  1716 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      130145  0.0  0.0   2604  1944 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      130156  0.0  0.0   2604  1900 ?        Ds   10:58   0:00 /usr/lib/ssh/sftp-server
>> root      130161  0.0  0.0      0     0 ?        D    10:59   0:00 [kworker/u64:60+flush-btrfs-1]
>> hasi      130168  0.0  0.0   2604  1872 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      130176  0.0  0.0   2604  1916 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      130183  0.0  0.0   2604  1688 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      130190  0.0  0.0   2604  1832 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      130199  0.0  0.0   2604  1704 ?        Ds   10:59   0:00 /usr/lib/ssh/sftp-server
>> root      130206  0.0  0.0      0     0 ?        D    11:00   0:00 [kworker/u64:61+flush-btrfs-1]
>> hasi      130222  0.0  0.0   2604  1776 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      130232  0.0  0.0   2604  1728 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      130239  0.0  0.0   2604  1772 ?        Ds   11:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      130246  0.0  0.0   2604  1752 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      130254  0.0  0.0   2604  1716 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      130263  0.0  0.0   2604  1864 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      130270  0.0  0.0   2604  1872 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      130279  0.0  0.0   2604  1700 ?        Ds   11:02   0:00 /usr/lib/ssh/sftp-server
>> root      130281  0.0  0.0      0     0 ?        D    11:02   0:00 [kworker/u64:62+flush-btrfs-1]
>> hasi      130291  0.0  0.0   2604  1772 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      130298  0.0  0.0   2604  1804 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      130310  0.0  0.0   2604  1772 ?        Ds   11:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      130322  0.0  0.0   2604  1916 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      130332  0.0  0.0   2604  1864 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      130339  0.0  0.0   2604  1704 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      130349  0.0  0.0   2604  1700 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      130356  0.0  0.0   2604  1716 ?        Ds   11:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      130370  0.0  0.0   2604  1864 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      130378  0.0  0.0   2604  1880 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      130391  0.0  0.0   2604  2016 ?        Ds   11:06   0:00 /usr/lib/ssh/sftp-server
>> root      130393  0.0  0.0      0     0 ?        D    11:07   0:00 [kworker/u64:63+flush-btrfs-1]
>> root      130394  0.0  0.0      0     0 ?        D    11:07   0:00 [kworker/u64:64+flush-btrfs-1]
>> hasi      130402  0.0  0.0   2604  1704 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      130410  0.0  0.0   2604  2072 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      130420  0.0  0.0   2604  1716 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      130427  0.0  0.0   2604  1700 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      130436  0.0  0.0   2604  1864 ?        Ds   11:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      130448  0.0  0.0   2604  1904 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      130458  0.0  0.0   2604  1844 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      130470  0.0  0.0   2604  1856 ?        Ds   11:08   0:00 /usr/lib/ssh/sftp-server
>> root      130475  0.0  0.0      0     0 ?        D    11:09   0:00 [kworker/u64:65+flush-btrfs-1]
>> hasi      130482  0.0  0.0   2604  1916 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      130490  0.0  0.0   2604  1688 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      130497  0.0  0.0   2604  1732 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      130504  0.0  0.0   2604  1872 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      130511  0.0  0.0   2604  1944 ?        Ds   11:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      130525  0.0  0.0   2604  1716 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      130533  0.0  0.0   2604  1872 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      130542  0.0  0.0   2604  1716 ?        Ds   11:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      130549  0.0  0.0   2604  1872 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      130557  0.0  0.0   2604  1804 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      130566  0.0  0.0   2604  1772 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      130573  0.0  0.0   2604  1728 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      130582  0.0  0.0   2604  1716 ?        Ds   11:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      130593  0.0  0.0   2604  1864 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      130600  0.0  0.0   2604  1716 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      130611  0.0  0.0   2604  1776 ?        Ds   11:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      130618  0.0  0.0   2604  1900 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      130630  0.0  0.0   2604  1872 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      130637  0.0  0.0   2604  1700 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      130644  0.0  0.0   2604  1596 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      130651  0.0  0.0   2604  1872 ?        Ds   11:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      130668  0.0  0.0   2604  1772 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      130676  0.0  0.0   2700  1988 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      130693  0.0  0.0   2604  1716 ?        Ds   11:16   0:00 /usr/lib/ssh/sftp-server
>> root      130694  0.0  0.0      0     0 ?        D    11:17   0:00 [kworker/u64:66+flush-btrfs-1]
>> hasi      130701  0.0  0.0   2604  1864 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      130709  0.0  0.0   2604  1776 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      130718  0.0  0.0   2604  1716 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      130725  0.0  0.0   2604  1864 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      130734  0.0  0.0   2604  1776 ?        Ds   11:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      130745  0.0  0.0   2604  1732 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      130752  0.0  0.0   2604  2072 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
>> root      130767  0.0  0.0      0     0 ?        D    11:18   0:00 [kworker/u64:67+flush-btrfs-1]
>> hasi      130775  0.0  0.0   2604  1904 ?        Ds   11:18   0:00 /usr/lib/ssh/sftp-server
>> root      130781  0.0  0.0      0     0 ?        D    11:19   0:00 [kworker/u64:69+flush-btrfs-1]
>> hasi      130788  0.0  0.0   2604  1716 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      130796  0.0  0.0   2604  1864 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      130803  0.0  0.0   2604  1776 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      130810  0.0  0.0   2604  1944 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      130817  0.0  0.0   2604  1916 ?        Ds   11:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      130831  0.0  0.0   2604  1704 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      130841  0.0  0.0   2604  1944 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      130848  0.0  0.0   2604  1944 ?        Ds   11:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      130855  0.0  0.0   2604  1828 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      130870  0.0  0.0   2604  1872 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
>> root      130871  0.0  0.0      0     0 ?        D    11:22   0:00 [kworker/u64:70+flush-btrfs-1]
>> hasi      130880  0.0  0.0   2604  1688 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      130887  0.0  0.0   2604  1776 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      130896  0.0  0.0   2604  1804 ?        Ds   11:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      130907  0.0  0.0   2604  1752 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      130914  0.0  0.0   2604  1728 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
>> root      130915  0.0  0.0      0     0 ?        D    11:23   0:00 [kworker/u64:72+flush-btrfs-1]
>> hasi      130926  0.0  0.0   2604  1732 ?        Ds   11:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      130933  0.0  0.0   2604  1596 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      130944  0.0  0.0   4408  2808 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      130947  0.0  0.0   2604  1872 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      130955  0.0  0.0   2604  1728 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      130962  0.0  0.0   2604  1872 ?        Ds   11:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      130977  0.0  0.0   2604  1804 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      130985  0.0  0.0   2604  1900 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      130995  0.0  0.0   2604  1704 ?        Ds   11:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      131024  0.0  0.0   2604  1716 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      131031  0.0  0.0   2604  1804 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      131040  0.0  0.0   2676  1904 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      131048  0.0  0.0   2604  1904 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
>> root      131064  0.0  0.0      0     0 ?        D    11:27   0:00 [kworker/u64:73+flush-btrfs-1]
>> hasi      131065  0.0  0.0   2604  1688 ?        Ds   11:27   0:00 /usr/lib/ssh/sftp-server
>> root      131067  0.0  0.0      0     0 ?        D    11:27   0:00 [kworker/u64:74+flush-btrfs-1]
>> hasi      131077  0.0  0.0   2604  2044 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      131085  0.0  0.0   2604  1804 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      131096  0.0  0.0   2604  1752 ?        Ds   11:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      131103  0.0  0.0   2604  1916 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      131112  0.0  0.0   2604  1916 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      131119  0.0  0.0   2604  1804 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      131126  0.0  0.0   2604  1704 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      131133  0.0  0.0   2676  2000 ?        Ds   11:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      131150  0.0  0.0   2604  1752 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      131158  0.0  0.0   2604  1872 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      131167  0.0  0.0   2604  1596 ?        Ds   11:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      131178  0.0  0.0   2604  1844 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
>> root      131184  0.0  0.0      0     0 ?        D    11:32   0:00 [kworker/u64:75+flush-btrfs-1]
>> hasi      131191  0.0  0.0   2604  1944 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      131200  0.0  0.0   2604  1596 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      131207  0.0  0.0   2604  1916 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      131216  0.0  0.0   2604  1732 ?        Ds   11:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      131227  0.0  0.0   2604  1728 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      131234  0.0  0.0   2604  1864 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      131250  0.0  0.0   2604  1716 ?        Ds   11:33   0:00 /usr/lib/ssh/sftp-server
>> root      131251  0.0  0.0      0     0 ?        D    11:34   0:00 [kworker/u64:76+flush-btrfs-1]
>> hasi      131258  0.0  0.0   2604  2004 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      131267  0.0  0.0   2604  1916 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      131274  0.0  0.0   2604  1772 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      131281  0.0  0.0   2604  1728 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      131288  0.0  0.0   2604  1872 ?        Ds   11:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      131301  0.0  0.0   2604  1728 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      131309  0.0  0.0   2604  1728 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      131318  0.0  0.0   2604  1872 ?        Ds   11:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      131326  0.0  0.0   2604  1944 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      131333  0.0  0.0   2676  1844 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      131344  0.0  0.0   2604  1728 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      131351  0.0  0.0   2604  1880 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
>> root      131357  0.0  0.0      0     0 ?        D    11:37   0:00 [kworker/u64:77+flush-btrfs-1]
>> hasi      131367  0.0  0.0   2604  1728 ?        Ds   11:37   0:00 /usr/lib/ssh/sftp-server
>> root      131372  0.0  0.0      0     0 ?        D    11:38   0:00 [kworker/u64:78+flush-btrfs-1]
>> hasi      131379  0.0  0.0   2604  1944 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      131386  0.0  0.0   2604  1944 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      131397  0.0  0.0   2604  1804 ?        Ds   11:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      131404  0.0  0.0   2604  1716 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      131412  0.0  0.0   2604  1704 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      131419  0.0  0.0   2604  1704 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      131426  0.0  0.0   2604  1900 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      131435  0.0  0.0   2604  1700 ?        Ds   11:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      131456  0.0  0.0   2604  1904 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      131464  0.0  0.0   4408  2968 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
>> root      131465  0.0  0.0      0     0 ?        D    11:41   0:00 [kworker/u64:81+flush-btrfs-1]
>> hasi      131476  0.0  0.0   2604  1700 ?        Ds   11:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      131484  0.0  0.0   2604  2000 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      131495  0.0  0.0   2604  1728 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      131504  0.0  0.0   2604  1804 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      131511  0.0  0.0   2604  1776 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      131520  0.0  0.0   2604  1776 ?        Ds   11:42   0:00 /usr/lib/ssh/sftp-server
>> root      131522  0.0  0.0      0     0 ?        D    11:42   0:00 [kworker/u64:82+flush-btrfs-1]
>> hasi      131532  0.0  0.0   2604  1944 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      131539  0.0  0.0   2604  1716 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      131550  0.0  0.0   2604  1704 ?        Ds   11:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      131557  0.0  0.0   2604  1776 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      131565  0.0  0.0   2604  1864 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      131572  0.0  0.0   2604  1688 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      131579  0.0  0.0   2604  1716 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      131586  0.0  0.0   2604  1716 ?        Ds   11:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      131600  0.0  0.0   2604  1864 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      131608  0.0  0.0   2604  1688 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      131617  0.0  0.0   2604  2000 ?        Ds   11:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      131630  0.0  0.0   2604  1944 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      131637  0.0  0.0   2604  1700 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      131646  0.0  0.0   2604  2000 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
>> root      131654  0.0  0.0      0     0 ?        D    11:47   0:00 [kworker/u64:84+flush-btrfs-1]
>> hasi      131655  0.0  0.0   2604  1904 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      131667  0.0  0.0   2604  1776 ?        Ds   11:47   0:00 /usr/lib/ssh/sftp-server
>> root      131674  0.0  0.0      0     0 ?        D    11:48   0:00 [kworker/u64:86+flush-btrfs-1]
>> hasi      131681  0.0  0.0   2604  1716 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      131688  0.0  0.0   2604  1716 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      131700  0.0  0.0   2604  1776 ?        Ds   11:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      131707  0.0  0.0   2604  1596 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      131715  0.0  0.0   2604  1716 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      131722  0.0  0.0   2604  1864 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      131729  0.0  0.0   2604  2072 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      131741  0.0  0.0   2604  1732 ?        Ds   11:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      131757  0.0  0.0   2604  1844 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
>> root      131761  0.0  0.0      0     0 ?        D    11:51   0:00 [kworker/u64:87+flush-btrfs-1]
>> hasi      131769  0.0  0.0   2604  1716 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      131778  0.0  0.0   2604  1752 ?        Ds   11:51   0:00 /usr/lib/ssh/sftp-server
>> root      131779  0.0  0.0      0     0 ?        D    11:52   0:00 [kworker/u64:89+flush-btrfs-1]
>> hasi      131787  0.0  0.0   2604  1944 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      131794  0.0  0.0   2604  1728 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      131803  0.0  0.0   2604  1716 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      131810  0.0  0.0   2604  1596 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      131819  0.0  0.0   2604  1864 ?        Ds   11:52   0:00 /usr/lib/ssh/sftp-server
>> root      131822  0.0  0.0      0     0 ?        D    11:52   0:00 [kworker/u64:90+flush-btrfs-1]
>> hasi      131831  0.0  0.0   2604  1804 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      131838  0.0  0.0   2604  1752 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      131849  0.0  0.0   2604  1596 ?        Ds   11:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      131856  0.0  0.0   2604  1916 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      131864  0.0  0.0   2604  1728 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      131871  0.0  0.0   2604  1916 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      131878  0.0  0.0   2604  1944 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      131885  0.0  0.0   2604  1732 ?        Ds   11:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      131900  0.0  0.0   2604  1728 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      131908  0.0  0.0   2604  1864 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      131917  0.0  0.0   2676  1828 ?        Ds   11:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      131930  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      131937  0.0  0.0   2604  1772 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      131946  0.0  0.0   2604  2016 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      131956  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      131965  0.0  0.0   2604  1716 ?        Ds   11:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      131998  0.0  0.0   2604  1716 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      132005  0.0  0.0   2604  1872 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      132016  0.0  0.0   2604  1772 ?        Ds   11:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      132023  0.0  0.0   2604  1908 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      132031  0.0  0.0   2604  1728 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      132038  0.0  0.0   2604  1804 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      132045  0.0  0.0   2604  1844 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      132057  0.0  0.0   2604  1716 ?        Ds   11:59   0:00 /usr/lib/ssh/sftp-server
>> root      132058  0.0  0.0      0     0 ?        D    11:59   0:00 [kworker/u64:92+flush-btrfs-1]
>> hasi      132080  0.0  0.0   2604  1908 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      132093  0.0  0.0   2604  1916 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
>> root      132094  0.0  0.0      0     0 ?        D    12:01   0:00 [kworker/u64:93+flush-btrfs-1]
>> hasi      132101  0.0  0.0   2604  1944 ?        Ds   12:01   0:00 /usr/lib/ssh/sftp-server
>> root      132103  0.0  0.0      0     0 ?        D    12:02   0:00 [kworker/u64:94+flush-btrfs-1]
>> hasi      132110  0.0  0.0   2604  1716 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      132117  0.0  0.0   2604  1704 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      132126  0.0  0.0   2604  1688 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      132133  0.0  0.0   2604  1944 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      132142  0.0  0.0   2604  1688 ?        Ds   12:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      132153  0.0  0.0   2604  1772 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      132160  0.0  0.0   2604  1916 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      132172  0.0  0.0   2604  1716 ?        Ds   12:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      132184  0.0  0.0   2604  1728 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      132192  0.0  0.0   2604  1712 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      132201  0.0  0.0   2604  1864 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      132208  0.0  0.0   2604  1596 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      132215  0.0  0.0   2604  1944 ?        Ds   12:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      132229  0.0  0.0   2604  1804 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      132237  0.0  0.0   2604  1728 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      132246  0.0  0.0   2604  1932 ?        Ds   12:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      132261  0.0  0.0   2604  1772 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      132268  0.0  0.0   2604  1804 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      132277  0.0  0.0   2676  1816 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      132285  0.0  0.0   2604  1772 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
>> root      132286  0.0  0.0      0     0 ?        D    12:07   0:00 [kworker/u64:96+flush-btrfs-1]
>> hasi      132295  0.0  0.0   2604  1804 ?        Ds   12:07   0:00 /usr/lib/ssh/sftp-server
>> root      132296  0.0  0.0      0     0 ?        D    12:07   0:00 [kworker/u64:97+flush-btrfs-1]
>> hasi      132307  0.0  0.0   2604  1728 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      132314  0.0  0.0   2604  1732 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      132325  0.0  0.0   2604  1932 ?        Ds   12:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      132334  0.0  0.0   2604  1716 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      132342  0.0  0.0   2604  1596 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      132349  0.0  0.0   2604  1804 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      132356  0.0  0.0   2604  1844 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      132368  0.0  0.0   2604  1932 ?        Ds   12:09   0:00 /usr/lib/ssh/sftp-server
>> root      132369  0.0  0.0      0     0 ?        D    12:09   0:00 [kworker/u64:98+flush-btrfs-1]
>> hasi      132383  0.0  0.0   2604  1772 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
>> root      132384  0.0  0.0      0     0 ?        D    12:11   0:00 [kworker/u64:99+flush-btrfs-1]
>> hasi      132392  0.0  0.0   2604  1688 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      132401  0.0  0.0   2604  1728 ?        Ds   12:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      132409  0.0  0.0   2604  1864 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      132416  0.0  0.0   2604  1596 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      132425  0.0  0.0   2604  1752 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      132432  0.0  0.0   2604  1700 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      132441  0.0  0.0   2604  1704 ?        Ds   12:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      132452  0.0  0.0   2604  1596 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      132459  0.0  0.0   2604  1596 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      132470  0.0  0.0   2604  1944 ?        Ds   12:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      132477  0.0  0.0   2604  1752 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      132485  0.0  0.0   2604  1944 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      132492  0.0  0.0   2604  1776 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      132499  0.0  0.0   2604  1864 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      132506  0.0  0.0   2604  1772 ?        Ds   12:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      132523  0.0  0.0   2604  1772 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      132531  0.0  0.0   2604  1904 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      132542  0.0  0.0   2604  2000 ?        Ds   12:16   0:00 /usr/lib/ssh/sftp-server
>> root      132556  0.0  0.0      0     0 ?        D    12:17   0:00 [kworker/u64:100+flush-btrfs-1]
>> hasi      132557  0.0  0.0   2604  1872 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      132564  0.0  0.0   2604  1728 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      132578  0.0  0.0   2604  1704 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      132585  0.0  0.0   2604  1916 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      132594  0.0  0.0   2604  1864 ?        Ds   12:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      132608  0.0  0.0   2604  1816 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      132617  0.0  0.0   2604  1772 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
>> root      132618  0.0  0.0      0     0 ?        D    12:18   0:00 [kworker/u64:101+flush-btrfs-1]
>> hasi      132630  0.0  0.0   2604  1932 ?        Ds   12:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      132640  0.0  0.0   2604  1772 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      132648  0.0  0.0   2604  1752 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      132655  0.0  0.0   2604  1916 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      132662  0.0  0.0   2604  1844 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      132672  0.0  0.0   2604  1772 ?        Ds   12:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      132686  0.0  0.0   2604  1960 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      132697  0.0  0.0   2604  1776 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      132704  0.0  0.0   2604  1752 ?        Ds   12:21   0:00 /usr/lib/ssh/sftp-server
>> root      132705  0.0  0.0      0     0 ?        D    12:22   0:00 [kworker/u64:103+flush-btrfs-1]
>> hasi      132713  0.0  0.0   2604  1872 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      132720  0.0  0.0   2604  1716 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      132729  0.0  0.0   2604  1700 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      132736  0.0  0.0   2604  1732 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      132745  0.0  0.0   2604  1732 ?        Ds   12:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      132756  0.0  0.0   2604  1596 ?        Ds   12:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      132763  0.0  0.0   2604  1916 ?        Ds   12:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      132774  0.0  0.0   2604  1688 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      132781  0.0  0.0   2604  1700 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      132789  0.0  0.0   2604  2016 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      132800  0.0  0.0   2604  1732 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      132807  0.0  0.0   2604  1728 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      132814  0.0  0.0   2604  1804 ?        Ds   12:24   0:00 /usr/lib/ssh/sftp-server
>> root      132815  0.0  0.0      0     0 ?        D    12:24   0:00 [kworker/u64:104+flush-btrfs-1]
>> hasi      132829  0.0  0.0   2604  1704 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      132839  0.0  0.0   2740  2000 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      132847  0.0  0.0   2604  1960 ?        Ds   12:26   0:00 /usr/lib/ssh/sftp-server
>> root      132851  0.0  0.0      0     0 ?        D    12:27   0:00 [kworker/u64:105+flush-btrfs-1]
>> root      132852  0.0  0.0      0     0 ?        D    12:27   0:00 [kworker/u64:107+flush-btrfs-1]
>> hasi      132860  0.0  0.0   2604  1688 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      132867  0.0  0.0   2604  1688 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      132876  0.0  0.0   2604  1944 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      132883  0.0  0.0   2604  1856 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      132893  0.0  0.0   2604  1596 ?        Ds   12:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      132925  0.0  0.0   2604  1932 ?        Ds   12:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      132935  0.0  0.0   2604  1716 ?        Ds   12:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      132946  0.0  0.0   2604  2044 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
>> root      132948  0.0  0.0      0     0 ?        D    12:29   0:00 [kworker/u64:109+flush-btrfs-1]
>> hasi      132955  0.0  0.0   2604  1716 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      132963  0.0  0.0   2604  1732 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      132970  0.0  0.0   2604  1596 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      132977  0.0  0.0   2604  1804 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      132985  0.0  0.0   2604  2016 ?        Ds   12:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      133001  0.0  0.0   2604  1944 ?        Ds   12:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      133011  0.0  0.0   2604  1916 ?        Ds   12:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      133018  0.0  0.0   2604  1776 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
>> root      133022  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:110+flush-btrfs-1]
>> hasi      133030  0.0  0.0   2604  1776 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      133037  0.0  0.0   2604  1704 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
>> root      133038  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:111+flush-btrfs-1]
>> hasi      133047  0.0  0.0   2604  1728 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      133054  0.0  0.0   2604  1704 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
>> root      133055  0.0  0.0      0     0 ?        D    12:32   0:00 [kworker/u64:112+flush-btrfs-1]
>> hasi      133064  0.0  0.0   2604  1700 ?        Ds   12:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      133075  0.0  0.0   2604  1772 ?        Ds   12:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      133082  0.0  0.0   2604  1596 ?        Ds   12:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      133098  0.0  0.0   2604  1596 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      133106  0.0  0.0   2604  2016 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      133115  0.0  0.0   2604  1944 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      133124  0.0  0.0   2604  1732 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      133131  0.0  0.0   2604  1716 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      133138  0.0  0.0   2604  1772 ?        Ds   12:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      133157  0.0  0.0   2604  1864 ?        Ds   12:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      133167  0.0  0.0   2604  1844 ?        Ds   12:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      133175  0.0  0.0   2604  1900 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      133188  0.0  0.0   2604  1732 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      133195  0.0  0.0   2604  1824 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      133204  0.0  0.0   2604  1704 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      133211  0.0  0.0   2604  1944 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      133221  0.0  0.0   2604  2044 ?        Ds   12:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      133234  0.0  0.0   2604  1716 ?        Ds   12:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      133241  0.0  0.0   2604  1972 ?        Ds   12:38   0:00 /usr/lib/ssh/sftp-server
>> root      133243  0.0  0.0      0     0 ?        D    12:38   0:00 [kworker/u64:113+flush-btrfs-1]
>> root      133249  0.0  0.0      0     0 ?        D    12:38   0:00 [kworker/u64:115+flush-btrfs-1]
>> hasi      133262  0.0  0.0   2604  1872 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      133269  0.0  0.0   2604  1704 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      133277  0.0  0.0   2604  1804 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      133284  0.0  0.0   2604  1716 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      133291  0.0  0.0   2604  1900 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      133299  0.0  0.0   2604  1904 ?        Ds   12:39   0:00 /usr/lib/ssh/sftp-server
>> root      133310  0.0  0.0      0     0 ?        D    12:41   0:00 [kworker/u64:116+flush-btrfs-1]
>> hasi      133318  0.0  0.0   2604  2016 ?        Ds   12:41   0:00 /usr/lib/ssh/sftp-server
>> root      133322  0.0  0.0      0     0 ?        D    12:41   0:00 [kworker/u64:117+flush-btrfs-1]
>> hasi      133332  0.0  0.0   2604  1716 ?        Ds   12:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      133339  0.0  0.0   2604  1688 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      133348  0.0  0.0   2604  1900 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      133357  0.0  0.0   2604  1772 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      133367  0.0  0.0   2604  1864 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      133374  0.0  0.0   2604  1804 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      133383  0.0  0.0   2604  1804 ?        Ds   12:42   0:00 /usr/lib/ssh/sftp-server
>> root      133388  0.0  0.0      0     0 ?        D    12:42   0:00 [kworker/u64:118+flush-btrfs-1]
>> root      133389  0.0  0.0      0     0 ?        D    12:43   0:00 [kworker/u64:120+flush-btrfs-1]
>> hasi      133396  0.0  0.0   2604  1728 ?        Ds   12:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      133403  0.0  0.0   2604  1596 ?        Ds   12:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      133414  0.0  0.0   2604  1752 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      133421  0.0  0.0   2604  1864 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      133429  0.0  0.0   2604  1872 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      133436  0.0  0.0   2604  1596 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      133443  0.0  0.0   2604  1752 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      133450  0.0  0.0   2604  1688 ?        Ds   12:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      133464  0.0  0.0   2604  1916 ?        Ds   12:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      133474  0.0  0.0   2604  1732 ?        Ds   12:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      133481  0.0  0.0   2604  1972 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      133501  0.0  0.0   2604  1772 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      133504  0.0  0.0   2604  1772 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
>> root      133505  0.0  0.0      0     0 ?        D    12:47   0:00 [kworker/u64:121+flush-btrfs-1]
>> hasi      133514  0.0  0.0   2604  1688 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      133521  0.0  0.0   2604  1932 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      133533  0.0  0.0   2604  1752 ?        Ds   12:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      133547  0.0  0.0   2604  1776 ?        Ds   12:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      133554  0.0  0.0   2604  1908 ?        Ds   12:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      133568  0.0  0.0   2604  1944 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      133575  0.0  0.0   2604  1864 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
>> root      133576  0.0  0.0      0     0 ?        D    12:49   0:00 [kworker/u64:122+flush-btrfs-1]
>> hasi      133584  0.0  0.0   2604  1916 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      133591  0.0  0.0   2604  1596 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      133598  0.0  0.0   2604  2016 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      133606  0.0  0.0   2604  2000 ?        Ds   12:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      133624  0.0  0.0   2604  1704 ?        Ds   12:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      133634  0.0  0.0   2604  1716 ?        Ds   12:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      133641  0.0  0.0   2604  1704 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      133652  0.0  0.0   2604  1696 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      133656  0.0  0.0   2604  1904 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      133667  0.0  0.0   2604  1872 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      133674  0.0  0.0   2604  1944 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
>> root      133675  0.0  0.0      0     0 ?        D    12:52   0:00 [kworker/u64:123+flush-btrfs-1]
>> hasi      133684  0.0  0.0   2604  1772 ?        Ds   12:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      133695  0.0  0.0   2604  1716 ?        Ds   12:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      133702  0.0  0.0   2604  1596 ?        Ds   12:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      133713  0.0  0.0   2604  1704 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      133721  0.0  0.0   2604  1688 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
>> root      133722  0.0  0.0      0     0 ?        D    12:54   0:00 [kworker/u64:125+flush-btrfs-1]
>> hasi      133730  0.0  0.0   2604  1804 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      133737  0.0  0.0   2736  1908 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      133746  0.0  0.0   2604  1732 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      133753  0.0  0.0   2604  1716 ?        Ds   12:54   0:00 /usr/lib/ssh/sftp-server
>> root      133757  0.0  0.0      0     0 ?        D    12:55   0:00 [kworker/u64:126+flush-btrfs-1]
>> hasi      133769  0.0  0.0   2604  1688 ?        Ds   12:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      133779  0.0  0.0   2604  1596 ?        Ds   12:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      133786  0.0  0.0   2604  1880 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      133796  0.0  0.0   2604  1716 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      133803  0.0  0.0   2604  1872 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      133812  0.0  0.0   2604  1916 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      133819  0.0  0.0   2604  1900 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      133830  0.0  0.0   2604  1772 ?        Ds   12:57   0:00 /usr/lib/ssh/sftp-server
>> root      133835  0.0  0.0      0     0 ?        D    12:57   0:00 [kworker/u64:127+flush-btrfs-1]
>> hasi      133842  0.0  0.0   2776  1952 ?        Ds   12:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      133851  0.0  0.0   2604  1872 ?        Ds   12:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      133863  0.0  0.0   2604  2072 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      133874  0.0  0.0   2604  1916 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      133882  0.0  0.0   2604  1804 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      133889  0.0  0.0   2604  1732 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      133896  0.0  0.0   2604  1752 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      133903  0.0  0.0   2604  1700 ?        Ds   12:59   0:00 /usr/lib/ssh/sftp-server
>> root      133917  0.0  0.0      0     0 ?        D    13:01   0:00 [kworker/u64:128+flush-btrfs-1]
>> hasi      133925  0.0  0.0   2604  1752 ?        Ds   13:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      133935  0.0  0.0   2604  1728 ?        Ds   13:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      133942  0.0  0.0   2604  1688 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      133950  0.0  0.0   2604  1716 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      133957  0.0  0.0   2604  2016 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      133970  0.0  0.0   2604  1700 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      133977  0.0  0.0   2604  1916 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      133986  0.0  0.0   2604  1804 ?        Ds   13:02   0:00 /usr/lib/ssh/sftp-server
>> root      133991  0.0  0.0      0     0 ?        D    13:02   0:00 [kworker/u64:129+flush-btrfs-1]
>> hasi      133998  0.0  0.0   2604  1776 ?        Ds   13:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      134005  0.0  0.0   2604  1704 ?        Ds   13:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      134017  0.0  0.0   2604  1776 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      134027  0.0  0.0   4408  2808 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      134030  0.0  0.0   2604  1704 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      134047  0.0  0.0   2604  1700 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      134058  0.0  0.0   2604  1916 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      134065  0.0  0.0   2604  1752 ?        Ds   13:04   0:00 /usr/lib/ssh/sftp-server
>> root      134073  0.0  0.0      0     0 ?        D    13:06   0:00 [kworker/u64:130+flush-btrfs-1]
>> hasi      134081  0.0  0.0   2604  1916 ?        Ds   13:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      134091  0.0  0.0   2604  1688 ?        Ds   13:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      134098  0.0  0.0   2604  1704 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      134106  0.0  0.0   2604  1864 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      134113  0.0  0.0   2604  1776 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      134122  0.0  0.0   2604  1728 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      134129  0.0  0.0   2604  2000 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
>> root      134132  0.0  0.0      0     0 ?        D    13:07   0:00 [kworker/u64:131+flush-btrfs-1]
>> hasi      134142  0.0  0.0   2604  1700 ?        Ds   13:07   0:00 /usr/lib/ssh/sftp-server
>> root      134147  0.0  0.0      0     0 ?        D    13:07   0:00 [kworker/u64:135+flush-btrfs-1]
>> hasi      134154  0.0  0.0   2604  1972 ?        Ds   13:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      134162  0.0  0.0   2604  1932 ?        Ds   13:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      134180  0.0  0.0   2604  1772 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      134187  0.0  0.0   2604  1716 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      134199  0.0  0.0   2604  1872 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      134202  0.0  0.0   2604  1732 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      134209  0.0  0.0   2604  1716 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      134216  0.0  0.0   2604  1864 ?        Ds   13:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      134230  0.0  0.0   2604  1804 ?        Ds   13:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      134240  0.0  0.0   2604  1704 ?        Ds   13:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      134247  0.0  0.0   2604  1916 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      134255  0.0  0.0   2604  1916 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      134262  0.0  0.0   2604  1904 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      134273  0.0  0.0   2604  1704 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      134282  0.0  0.0   2604  1872 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      134289  0.0  0.0   2604  1728 ?        Ds   13:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      134300  0.0  0.0   2604  1804 ?        Ds   13:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      134307  0.0  0.0   2604  1944 ?        Ds   13:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      134318  0.0  0.0   2604  1732 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      134325  0.0  0.0   2604  1596 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      134338  0.0  0.0   2604  1688 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      134340  0.0  0.0   2604  2016 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      134356  0.0  0.0   2604  1872 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      134363  0.0  0.0   2604  1916 ?        Ds   13:14   0:00 /usr/lib/ssh/sftp-server
>> root      134367  0.0  0.0      0     0 ?        D    13:15   0:00 [kworker/u64:136+flush-btrfs-1]
>> hasi      134381  0.0  0.0   2604  1732 ?        Ds   13:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      134391  0.0  0.0   2604  1772 ?        Ds   13:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      134398  0.0  0.0   2604  1752 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      134406  0.0  0.0   2604  1704 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      134413  0.0  0.0   2604  1688 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      134422  0.0  0.0   2604  1916 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      134431  0.0  0.0   2604  1704 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      134438  0.0  0.0   2604  1752 ?        Ds   13:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      134449  0.0  0.0   2604  1704 ?        Ds   13:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      134461  0.0  0.0   2604  1900 ?        Ds   13:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      134486  0.0  0.0   2604  1872 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
>> root      134487  0.0  0.0      0     0 ?        D    13:19   0:00 [kworker/u64:138+flush-btrfs-1]
>> hasi      134494  0.0  0.0   2604  1916 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      134502  0.0  0.0   2604  1916 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      134509  0.0  0.0   2604  1864 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      134516  0.0  0.0   2604  1752 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      134523  0.0  0.0   2604  1688 ?        Ds   13:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      134537  0.0  0.0   2604  1772 ?        Ds   13:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      134547  0.0  0.0   2604  1944 ?        Ds   13:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      134554  0.0  0.0   2604  1732 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      134562  0.0  0.0   2604  1864 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      134569  0.0  0.0   2604  1596 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      134578  0.0  0.0   2604  1772 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      134587  0.0  0.0   2604  1916 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      134594  0.0  0.0   2604  1864 ?        Ds   13:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      134605  0.0  0.0   2604  1716 ?        Ds   13:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      134612  0.0  0.0   2604  1716 ?        Ds   13:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      134623  0.0  0.0   2604  1864 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      134630  0.0  0.0   2604  1716 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      134638  0.0  0.0   2604  1732 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      134645  0.0  0.0   2604  1900 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      134653  0.0  0.0   2604  1804 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      134660  0.0  0.0   2604  1688 ?        Ds   13:24   0:00 /usr/lib/ssh/sftp-server
>> root      134675  0.0  0.0      0     0 ?        D    13:26   0:00 [kworker/u64:139+flush-btrfs-1]
>> hasi      134683  0.0  0.0   2604  1732 ?        Ds   13:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      134693  0.0  0.0   2604  1700 ?        Ds   13:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      134700  0.0  0.0   2604  1716 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      134708  0.0  0.0   2604  1688 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      134715  0.0  0.0   2604  1864 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      134724  0.0  0.0   2604  1596 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      134733  0.0  0.0   2604  1776 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      134740  0.0  0.0   2604  1728 ?        Ds   13:27   0:00 /usr/lib/ssh/sftp-server
>> root      134745  0.0  0.0      0     0 ?        D    13:28   0:00 [kworker/u64:142+flush-btrfs-1]
>> hasi      134752  0.0  0.0   2604  1688 ?        Ds   13:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      134759  0.0  0.0   2604  1904 ?        Ds   13:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      134771  0.0  0.0   2604  1932 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      134784  0.0  0.0   2604  1772 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      134792  0.0  0.0   2604  1916 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      134799  0.0  0.0   2604  1700 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      134806  0.0  0.0   2604  1716 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      134813  0.0  0.0   2604  1596 ?        Ds   13:29   0:00 /usr/lib/ssh/sftp-server
>> root      134814  0.0  0.0      0     0 ?        D    13:29   0:00 [kworker/u64:143+flush-btrfs-1]
>> hasi      134828  0.0  0.0   2604  1776 ?        Ds   13:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      134838  0.0  0.0   2604  1916 ?        Ds   13:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      134848  0.0  0.0   2604  1716 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      134856  0.0  0.0   2604  1804 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      134863  0.0  0.0   2604  1872 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      134872  0.0  0.0   2604  1716 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      134881  0.0  0.0   2604  1916 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      134888  0.0  0.0   2604  1916 ?        Ds   13:32   0:00 /usr/lib/ssh/sftp-server
>> root      134893  0.0  0.0      0     0 ?        D    13:33   0:00 [kworker/u64:145+flush-btrfs-1]
>> hasi      134900  0.0  0.0   2604  1872 ?        Ds   13:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      134911  0.0  0.0   2604  1700 ?        Ds   13:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      134923  0.0  0.0   2604  1728 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      134930  0.0  0.0   2604  1944 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      134938  0.0  0.0   2604  1880 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      134946  0.0  0.0   2604  1732 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      134953  0.0  0.0   2604  1944 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      134960  0.0  0.0   2604  1900 ?        Ds   13:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      134981  0.0  0.0   2604  1716 ?        Ds   13:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      134991  0.0  0.0   2604  1804 ?        Ds   13:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      134999  0.0  0.0   2604  1864 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      135007  0.0  0.0   2604  1944 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      135014  0.0  0.0   2604  1716 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      135023  0.0  0.0   2604  1716 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      135032  0.0  0.0   2604  1596 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      135039  0.0  0.0   2604  1700 ?        Ds   13:37   0:00 /usr/lib/ssh/sftp-server
>> root      135065  0.0  0.0      0     0 ?        D    13:38   0:00 [kworker/u64:146+flush-btrfs-1]
>> hasi      135072  0.0  0.0   2604  1596 ?        Ds   13:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      135085  0.0  0.0   2604  1916 ?        Ds   13:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      135097  0.0  0.0   2604  1972 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      135105  0.0  0.0   2604  1716 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      135113  0.0  0.0   2604  1700 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      135120  0.0  0.0   2604  1900 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      135133  0.0  0.0   4308  2940 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      135140  0.0  0.0   2604  1960 ?        Ds   13:39   0:00 /usr/lib/ssh/sftp-server
>> root      135161  0.0  0.0      0     0 ?        D    13:41   0:00 [kworker/u64:148+flush-btrfs-1]
>> hasi      135169  0.0  0.0   2604  1728 ?        Ds   13:41   0:00 /usr/lib/ssh/sftp-server
>> root      135170  0.0  0.0      0     0 ?        D    13:41   0:00 [kworker/u64:149+flush-btrfs-1]
>> hasi      135183  0.0  0.0   2604  1732 ?        Ds   13:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      135191  0.0  0.0   2604  1704 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      135199  0.0  0.0   2604  1804 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      135206  0.0  0.0   2604  1700 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      135215  0.0  0.0   2604  1752 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      135224  0.0  0.0   2604  1864 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      135231  0.0  0.0   2604  1688 ?        Ds   13:42   0:00 /usr/lib/ssh/sftp-server
>> root      135234  0.0  0.0      0     0 ?        D    13:43   0:00 [kworker/u64:150+flush-btrfs-1]
>> hasi      135241  0.0  0.0   2604  1688 ?        Ds   13:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      135250  0.0  0.0   2604  1732 ?        Ds   13:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      135258  0.0  0.0   2604  1776 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      135265  0.0  0.0   2604  1716 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      135273  0.0  0.0   2604  1596 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      135280  0.0  0.0   2604  1732 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      135287  0.0  0.0   2604  1916 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      135294  0.0  0.0   2604  1864 ?        Ds   13:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      135308  0.0  0.0   2604  1732 ?        Ds   13:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      135316  0.0  0.0   2604  1596 ?        Ds   13:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      135323  0.0  0.0   2604  1716 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      135337  0.0  0.0   2604  1932 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      135345  0.0  0.0   2604  1944 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      135355  0.0  0.0   2604  1704 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      135364  0.0  0.0   2604  1704 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      135371  0.0  0.0   2604  1864 ?        Ds   13:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      135385  0.0  0.0   2604  1704 ?        Ds   13:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      135395  0.0  0.0   2604  1752 ?        Ds   13:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      135403  0.0  0.0   2604  1752 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      135410  0.0  0.0   2604  1864 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      135418  0.0  0.0   2604  1700 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      135425  0.0  0.0   2604  1944 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      135432  0.0  0.0   2676  1844 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      135439  0.0  0.0   2604  1776 ?        Ds   13:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      135456  0.0  0.0   2604  1816 ?        Ds   13:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      135468  0.0  0.0   2604  1716 ?        Ds   13:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      135475  0.0  0.0   2604  1916 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      135483  0.0  0.0   2604  1776 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      135490  0.0  0.0   2604  2072 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      135500  0.0  0.0   2604  1872 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      135509  0.0  0.0   2604  1864 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      135517  0.0  0.0   2604  1716 ?        Ds   13:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      135527  0.0  0.0   2604  1704 ?        Ds   13:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      135542  0.0  0.0   2604  1596 ?        Ds   13:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      135551  0.0  0.0   2604  1716 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      135559  0.0  0.0   2604  1732 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      135567  0.0  0.0   2604  1916 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      135574  0.0  0.0   2604  1716 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      135582  0.0  0.0   2604  1728 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      135589  0.0  0.0   2604  1772 ?        Ds   13:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      135602  0.0  0.0   2604  1700 ?        Ds   13:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      135610  0.0  0.0   2604  1992 ?        Ds   13:56   0:00 /usr/lib/ssh/sftp-server
>> root      135614  0.0  0.0      0     0 ?        D    13:57   0:00 [kworker/u64:151+flush-btrfs-1]
>> hasi      135619  0.0  0.0   2604  1688 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      135627  0.0  0.0   2604  2000 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      135637  0.0  0.0   2604  1596 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      135647  0.0  0.0   2604  1992 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
>> root      135652  0.0  0.0      0     0 ?        D    13:57   0:00 [kworker/u64:152+flush-btrfs-1]
>> hasi      135659  0.0  0.0   2604  2016 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      135667  0.0  0.0   2604  1900 ?        Ds   13:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      135679  0.0  0.0   2604  1972 ?        Ds   13:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      135688  0.0  0.0   2604  1916 ?        Ds   13:58   0:00 /usr/lib/ssh/sftp-server
>> root      135689  0.0  0.0      0     0 ?        D    13:59   0:00 [kworker/u64:153+flush-btrfs-1]
>> hasi      135697  0.0  0.0   2604  1992 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      135707  0.0  0.0   2604  1804 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      135716  0.0  0.0   2604  1716 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      135723  0.0  0.0   2604  1880 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      135731  0.0  0.0   2604  1772 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      135738  0.0  0.0   2604  1916 ?        Ds   13:59   0:00 /usr/lib/ssh/sftp-server
>> root      135754  0.0  0.0      0     0 ?        D    14:01   0:00 [kworker/u64:154+flush-btrfs-1]
>> hasi      135763  0.0  0.0   2604  1944 ?        Ds   14:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      135771  0.0  0.0   2604  1716 ?        Ds   14:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      135778  0.0  0.0   2604  1776 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      135789  0.0  0.0   2604  1872 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      135796  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      135805  0.0  0.0   2604  1716 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      135814  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      135821  0.0  0.0   2604  1804 ?        Ds   14:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      135831  0.0  0.0   2604  1944 ?        Ds   14:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      135846  0.0  0.0   2604  1960 ?        Ds   14:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      135857  0.0  0.0   2604  1704 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      135864  0.0  0.0   2604  1904 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      135878  0.0  0.0   2604  1804 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      135885  0.0  0.0   2604  1716 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      135892  0.0  0.0   2604  1804 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      135899  0.0  0.0   2604  1596 ?        Ds   14:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      135910  0.0  0.0   2604  1688 ?        Ds   14:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      135918  0.0  0.0   2604  1728 ?        Ds   14:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      135926  0.0  0.0   2604  1732 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      135935  0.0  0.0   2604  1772 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
>> root      135936  0.0  0.0      0     0 ?        D    14:07   0:00 [kworker/u64:155+flush-btrfs-1]
>> hasi      135943  0.0  0.0   2604  1772 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      135952  0.0  0.0   2604  1728 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      135961  0.0  0.0   2604  1864 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      135968  0.0  0.0   2604  2044 ?        Ds   14:07   0:00 /usr/lib/ssh/sftp-server
>> root      135969  0.0  0.0      0     0 ?        D    14:07   0:00 [kworker/u64:156+flush-btrfs-1]
>> hasi      136004  0.0  0.0   2604  1908 ?        Ds   14:08   0:00 /usr/lib/ssh/sftp-server
>> root      136011  0.0  0.0      0     0 ?        D    14:08   0:00 [kworker/u64:157+flush-btrfs-1]
>> hasi      136019  0.0  0.0   2604  1732 ?        Ds   14:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      136027  0.0  0.0   2604  1700 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      136035  0.0  0.0   2604  1700 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      136043  0.0  0.0   2604  1864 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      136050  0.0  0.0   2604  1772 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      136057  0.0  0.0   2604  1776 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      136064  0.0  0.0   2604  1944 ?        Ds   14:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      136075  0.0  0.0   2604  1772 ?        Ds   14:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      136084  0.0  0.0   2604  1944 ?        Ds   14:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      136092  0.0  0.0   2604  1700 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      136100  0.0  0.0   2604  1944 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      136107  0.0  0.0   2604  1752 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      136116  0.0  0.0   2604  1716 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      136125  0.0  0.0   2604  1804 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      136132  0.0  0.0   2604  1772 ?        Ds   14:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      136143  0.0  0.0   2604  1940 ?        Ds   14:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      136153  0.0  0.0   2604  1864 ?        Ds   14:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      136162  0.0  0.0   2604  1772 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      136170  0.0  0.0   2604  1716 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      136178  0.0  0.0   2604  2000 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      136187  0.0  0.0   2604  1688 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      136194  0.0  0.0   2604  1972 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      136207  0.0  0.0   2604  1772 ?        Ds   14:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      136223  0.0  0.0   2604  1916 ?        Ds   14:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      136231  0.0  0.0   2604  1776 ?        Ds   14:16   0:00 /usr/lib/ssh/sftp-server
>> root      136232  0.0  0.0      0     0 ?        D    14:17   0:00 [kworker/u64:159+flush-btrfs-1]
>> hasi      136239  0.0  0.0   2604  1732 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      136247  0.0  0.0   2604  1716 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      136254  0.0  0.0   2604  1596 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      136263  0.0  0.0   2604  1716 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      136272  0.0  0.0   2604  1732 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      136279  0.0  0.0   2604  1596 ?        Ds   14:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      136290  0.0  0.0   2604  1596 ?        Ds   14:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      136299  0.0  0.0   2604  1716 ?        Ds   14:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      136312  0.0  0.0   2604  1860 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      136323  0.0  0.0   2604  1716 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      136331  0.0  0.0   2604  1700 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      136338  0.0  0.0   2604  1900 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      136352  0.0  0.0   2604  1704 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      136359  0.0  0.0   2604  1728 ?        Ds   14:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      136375  0.0  0.0   2604  1716 ?        Ds   14:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      136384  0.0  0.0   2604  1752 ?        Ds   14:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      136391  0.0  0.0   2604  1772 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      136399  0.0  0.0   2604  1700 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      136406  0.0  0.0   2604  1732 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      136415  0.0  0.0   2604  1716 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      136424  0.0  0.0   2604  1804 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      136431  0.0  0.0   2604  1728 ?        Ds   14:22   0:00 /usr/lib/ssh/sftp-server
>> root      136434  0.0  0.0      0     0 ?        D    14:23   0:00 [kworker/u64:160+flush-btrfs-1]
>> hasi      136442  0.0  0.0   2604  1704 ?        Ds   14:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      136450  0.0  0.0   2604  1716 ?        Ds   14:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      136458  0.0  0.0   2604  1872 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      136465  0.0  0.0   2604  1752 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      136473  0.0  0.0   2604  1864 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      136480  0.0  0.0   2604  1700 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      136487  0.0  0.0   2604  1908 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      136495  0.0  0.0   2604  1944 ?        Ds   14:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      136507  0.0  0.0   2604  1688 ?        Ds   14:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      136516  0.0  0.0   2604  1908 ?        Ds   14:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      136526  0.0  0.0   2604  2044 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      136537  0.0  0.0   2604  1880 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      136545  0.0  0.0   2604  1944 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      136554  0.0  0.0   2604  1960 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      136569  0.0  0.0   2604  1944 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      136577  0.0  0.0   2604  1716 ?        Ds   14:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      136587  0.0  0.0   2604  1864 ?        Ds   14:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      136596  0.0  0.0   2604  1700 ?        Ds   14:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      136604  0.0  0.0   2604  1864 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      136611  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      136619  0.0  0.0   2604  1728 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      136626  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      136633  0.0  0.0   2604  1716 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      136640  0.0  0.0   2604  1904 ?        Ds   14:29   0:00 /usr/lib/ssh/sftp-server
>> root      136644  0.0  0.0      0     0 ?        D    14:29   0:00 [kworker/u64:161+flush-btrfs-1]
>> root      136646  0.0  0.0      0     0 ?        D    14:29   0:00 [kworker/u64:162+flush-btrfs-1]
>> hasi      136657  0.0  0.0   2604  2016 ?        Ds   14:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      136668  0.0  0.0   2604  1704 ?        Ds   14:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      136678  0.0  0.0   2604  1712 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      136689  0.0  0.0   2604  1804 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      136692  0.0  0.0   2604  1972 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      136703  0.0  0.0   2604  1752 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      136712  0.0  0.0   2604  1688 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      136719  0.0  0.0   2604  1772 ?        Ds   14:32   0:00 /usr/lib/ssh/sftp-server
>> hasi      136729  0.0  0.0   2604  1776 ?        Ds   14:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      136740  0.0  0.0   2604  1972 ?        Ds   14:33   0:00 /usr/lib/ssh/sftp-server
>> hasi      136751  0.0  0.0   2604  1944 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      136758  0.0  0.0   2604  1804 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      136766  0.0  0.0   2604  1944 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      136773  0.0  0.0   2604  1872 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      136780  0.0  0.0   2604  1860 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      136787  0.0  0.0   2604  1688 ?        Ds   14:34   0:00 /usr/lib/ssh/sftp-server
>> hasi      136799  0.0  0.0   2604  1772 ?        Ds   14:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      136807  0.0  0.0   2604  1596 ?        Ds   14:36   0:00 /usr/lib/ssh/sftp-server
>> hasi      136815  0.0  0.0   2604  1972 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      136827  0.0  0.0   2604  1900 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      136836  0.0  0.0   2604  1916 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      136845  0.0  0.0   2604  1716 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      136855  0.0  0.0   2604  1776 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
>> hasi      136862  0.0  0.0   2604  1772 ?        Ds   14:37   0:00 /usr/lib/ssh/sftp-server
>> root      136866  0.0  0.0      0     0 ?        D    14:38   0:00 [kworker/u64:163+flush-btrfs-1]
>> hasi      136874  0.0  0.0   2604  1900 ?        Ds   14:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      136906  0.0  0.0   2604  1776 ?        Ds   14:38   0:00 /usr/lib/ssh/sftp-server
>> hasi      136914  0.0  0.0   2604  1732 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
>> root      136915  0.0  0.0      0     0 ?        D    14:39   0:00 [kworker/u64:164+flush-btrfs-1]
>> hasi      136922  0.0  0.0   2604  1752 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      136930  0.0  0.0   2604  1688 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      136941  0.0  0.0   2604  1596 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      136948  0.0  0.0   2604  1916 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      136955  0.0  0.0   2604  1700 ?        Ds   14:39   0:00 /usr/lib/ssh/sftp-server
>> hasi      136975  0.0  0.0   2604  1728 ?        Ds   14:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      136984  0.0  0.0   2604  1688 ?        Ds   14:41   0:00 /usr/lib/ssh/sftp-server
>> hasi      136993  0.0  0.0   2604  1700 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
>> root      136994  0.0  0.0      0     0 ?        D    14:42   0:00 [kworker/u64:165+flush-btrfs-1]
>> hasi      137001  0.0  0.0   2604  1872 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      137008  0.0  0.0   2604  1972 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      137019  0.0  0.0   2604  1716 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      137028  0.0  0.0   2604  1864 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      137035  0.0  0.0   2604  1704 ?        Ds   14:42   0:00 /usr/lib/ssh/sftp-server
>> hasi      137045  0.0  0.0   2604  1732 ?        Ds   14:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      137053  0.0  0.0   2604  1732 ?        Ds   14:43   0:00 /usr/lib/ssh/sftp-server
>> hasi      137061  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      137071  0.0  0.0   2604  1688 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      137079  0.0  0.0   2604  1900 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
>> root      137081  0.0  0.0      0     0 ?        D    14:44   0:00 [kworker/u64:166+flush-btrfs-1]
>> root      137082  0.0  0.0      0     0 ?        D    14:44   0:00 [kworker/u64:167+flush-btrfs-1]
>> hasi      137089  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      137096  0.0  0.0   2604  1716 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      137103  0.0  0.0   2604  1700 ?        Ds   14:44   0:00 /usr/lib/ssh/sftp-server
>> hasi      137115  0.0  0.0   2604  1772 ?        Ds   14:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      137123  0.0  0.0   2604  1904 ?        Ds   14:46   0:00 /usr/lib/ssh/sftp-server
>> hasi      137133  0.0  0.0   2604  1772 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      137140  0.0  0.0   2604  1596 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      137147  0.0  0.0   2604  1732 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      137156  0.0  0.0   2604  1704 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      137166  0.0  0.0   2604  1700 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      137175  0.0  0.0   2604  1776 ?        Ds   14:47   0:00 /usr/lib/ssh/sftp-server
>> hasi      137188  0.0  0.0   2604  1804 ?        Ds   14:48   0:00 /usr/lib/ssh/sftp-server
>> hasi      137200  0.0  0.0   2604  2044 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      137207  0.0  0.0   2604  1944 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      137214  0.0  0.0   2604  1732 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      137222  0.0  0.0   2604  1872 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      137229  0.0  0.0   2604  1960 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      137239  0.0  0.0   2604  1716 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      137246  0.0  0.0   2604  1688 ?        Ds   14:49   0:00 /usr/lib/ssh/sftp-server
>> hasi      137257  0.0  0.0   2604  1704 ?        Ds   14:51   0:00 /usr/lib/ssh/sftp-server
>> root      137258  0.0  0.0      0     0 ?        D    14:51   0:00 [kworker/u64:168+flush-btrfs-1]
>> hasi      137266  0.0  0.0   2604  1864 ?        Ds   14:51   0:00 /usr/lib/ssh/sftp-server
>> hasi      137274  0.0  0.0   2604  1700 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      137281  0.0  0.0   2604  1688 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      137288  0.0  0.0   2604  1716 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      137297  0.0  0.0   2604  1752 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      137307  0.0  0.0   2604  1804 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      137314  0.0  0.0   2604  1752 ?        Ds   14:52   0:00 /usr/lib/ssh/sftp-server
>> hasi      137325  0.0  0.0   2604  1716 ?        Ds   14:53   0:00 /usr/lib/ssh/sftp-server
>> hasi      137335  0.0  0.0   2604  1900 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      137344  0.0  0.0   2604  1772 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      137351  0.0  0.0   2604  1944 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      137359  0.0  0.0   2604  1704 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      137366  0.0  0.0   2604  1704 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      137373  0.0  0.0   2604  1960 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      137381  0.0  0.0   2604  1872 ?        Ds   14:54   0:00 /usr/lib/ssh/sftp-server
>> hasi      137398  0.0  0.0   2604  1732 ?        Ds   14:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      137407  0.0  0.0   2604  1716 ?        Ds   14:56   0:00 /usr/lib/ssh/sftp-server
>> hasi      137415  0.0  0.0   2604  1700 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      137422  0.0  0.0   2604  1772 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      137429  0.0  0.0   2604  1704 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      137438  0.0  0.0   2604  1776 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      137448  0.0  0.0   2604  1700 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      137455  0.0  0.0   2604  1732 ?        Ds   14:57   0:00 /usr/lib/ssh/sftp-server
>> hasi      137465  0.0  0.0   2604  1752 ?        Ds   14:58   0:00 /usr/lib/ssh/sftp-server
>> hasi      137479  0.0  0.0   2604  1752 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      137486  0.0  0.0   2604  1804 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      137493  0.0  0.0   2604  1864 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      137501  0.0  0.0   2604  1916 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      137508  0.0  0.0   2604  1772 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      137515  0.0  0.0   2604  2072 ?        Ds   14:59   0:00 /usr/lib/ssh/sftp-server
>> hasi      137530  0.0  0.0   2604  1716 ?        Ds   15:00   0:00 /usr/lib/ssh/sftp-server
>> root      137531  0.0  0.0      0     0 ?        D    15:00   0:00 [kworker/u64:171+flush-btrfs-1]
>> hasi      137547  0.0  0.0   2604  1864 ?        Ds   15:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      137557  0.0  0.0   2604  1804 ?        Ds   15:01   0:00 /usr/lib/ssh/sftp-server
>> hasi      137566  0.0  0.0   2604  1716 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      137573  0.0  0.0   2604  1772 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      137580  0.0  0.0   2604  1732 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      137589  0.0  0.0   2604  1596 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      137598  0.0  0.0   2604  1596 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
>> hasi      137605  0.0  0.0   2604  1700 ?        Ds   15:02   0:00 /usr/lib/ssh/sftp-server
>> root      137609  0.0  0.0      0     0 ?        D    15:03   0:00 [kworker/u64:172+flush-btrfs-1]
>> root      137611  0.0  0.0      0     0 ?        D    15:03   0:00 [kworker/u64:173+flush-btrfs-1]
>> hasi      137625  0.0  0.0   2604  1816 ?        Ds   15:03   0:00 /usr/lib/ssh/sftp-server
>> hasi      137640  0.0  0.0   2604  1688 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      137647  0.0  0.0   2604  2044 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      137656  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      137664  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      137671  0.0  0.0   2604  1944 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      137678  0.0  0.0   2604  1916 ?        Ds   15:04   0:00 /usr/lib/ssh/sftp-server
>> hasi      137688  0.0  0.0   2604  1772 ?        Ds   15:05   0:00 /usr/lib/ssh/sftp-server
>> hasi      137699  0.0  0.0   2604  1704 ?        Ds   15:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      137709  0.0  0.0   2604  1728 ?        Ds   15:06   0:00 /usr/lib/ssh/sftp-server
>> hasi      137717  0.0  0.0   2604  1776 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      137724  0.0  0.0   2604  2016 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      137732  0.0  0.0   2604  1752 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      137739  0.0  0.0   2604  1916 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      137748  0.0  0.0   2604  1864 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      137755  0.0  0.0   2604  1704 ?        Ds   15:07   0:00 /usr/lib/ssh/sftp-server
>> hasi      137766  0.0  0.0   2604  1688 ?        Ds   15:08   0:00 /usr/lib/ssh/sftp-server
>> hasi      137778  0.0  0.0   2604  1704 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      137785  0.0  0.0   2604  1772 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      137792  0.0  0.0   2604  1700 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      137800  0.0  0.0   2604  1596 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      137807  0.0  0.0   2604  1804 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      137816  0.0  0.0   2604  2044 ?        Ds   15:09   0:00 /usr/lib/ssh/sftp-server
>> hasi      137826  0.0  0.0   2604  1776 ?        Ds   15:10   0:00 /usr/lib/ssh/sftp-server
>> hasi      137844  0.0  0.0   2604  1816 ?        Ds   15:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      137856  0.0  0.0   2604  1844 ?        Ds   15:11   0:00 /usr/lib/ssh/sftp-server
>> hasi      137865  0.0  0.0   2604  1732 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      137872  0.0  0.0   2604  1772 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      137879  0.0  0.0   2604  1752 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      137886  0.0  0.0   2604  1700 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      137895  0.0  0.0   2604  1596 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      137902  0.0  0.0   2604  1944 ?        Ds   15:12   0:00 /usr/lib/ssh/sftp-server
>> hasi      137914  0.0  0.0   2604  1752 ?        Ds   15:13   0:00 /usr/lib/ssh/sftp-server
>> hasi      137925  0.0  0.0   2604  1716 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      137932  0.0  0.0   2604  1700 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      137939  0.0  0.0   2604  1704 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      137947  0.0  0.0   2604  1960 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      137956  0.0  0.0   2604  1732 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      137963  0.0  0.0   2604  1716 ?        Ds   15:14   0:00 /usr/lib/ssh/sftp-server
>> hasi      137977  0.0  0.0   2604  1716 ?        Ds   15:15   0:00 /usr/lib/ssh/sftp-server
>> hasi      137992  0.0  0.0   2604  1716 ?        Ds   15:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      138002  0.0  0.0   2604  1872 ?        Ds   15:16   0:00 /usr/lib/ssh/sftp-server
>> hasi      138011  0.0  0.0   2604  1716 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      138018  0.0  0.0   2604  1872 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      138025  0.0  0.0   2604  1804 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      138046  0.0  0.0   2604  1880 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      138057  0.0  0.0   2604  1988 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      138065  0.0  0.0   2604  1844 ?        Ds   15:17   0:00 /usr/lib/ssh/sftp-server
>> hasi      138088  0.0  0.0   2604  1864 ?        Ds   15:18   0:00 /usr/lib/ssh/sftp-server
>> hasi      138099  0.0  0.0   2604  1752 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      138106  0.0  0.0   2604  1916 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      138113  0.0  0.0   2604  1704 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      138121  0.0  0.0   2604  1872 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      138132  0.0  0.0   2604  1776 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      138139  0.0  0.0   2604  1688 ?        Ds   15:19   0:00 /usr/lib/ssh/sftp-server
>> hasi      138147  0.0  0.0   2604  1732 ?        Ds   15:20   0:00 /usr/lib/ssh/sftp-server
>> hasi      138162  0.0  0.0   2604  1880 ?        Ds   15:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      138173  0.0  0.0   2604  1944 ?        Ds   15:21   0:00 /usr/lib/ssh/sftp-server
>> hasi      138182  0.0  0.0   2604  1776 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      138189  0.0  0.0   2604  1716 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      138196  0.0  0.0   2604  1688 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      138203  0.0  0.0   2604  1776 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      138212  0.0  0.0   2604  1872 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      138219  0.0  0.0   2604  1688 ?        Ds   15:22   0:00 /usr/lib/ssh/sftp-server
>> hasi      138229  0.0  0.0   2604  1716 ?        Ds   15:23   0:00 /usr/lib/ssh/sftp-server
>> hasi      138243  0.0  0.0   2604  1944 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      138250  0.0  0.0   2604  1728 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      138257  0.0  0.0   2604  1900 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      138267  0.0  0.0   2604  1904 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      138276  0.0  0.0   2604  1688 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      138283  0.0  0.0   2604  1700 ?        Ds   15:24   0:00 /usr/lib/ssh/sftp-server
>> hasi      138293  0.0  0.0   2604  1816 ?        Ds   15:25   0:00 /usr/lib/ssh/sftp-server
>> hasi      138305  0.0  0.0   2604  1596 ?        Ds   15:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      138315  0.0  0.0   2604  1688 ?        Ds   15:26   0:00 /usr/lib/ssh/sftp-server
>> hasi      138326  0.0  0.0   2604  1728 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      138333  0.0  0.0   2604  1716 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      138340  0.0  0.0   2604  1716 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      138347  0.0  0.0   2604  1596 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      138356  0.0  0.0   2604  1776 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      138363  0.0  0.0   2604  1688 ?        Ds   15:27   0:00 /usr/lib/ssh/sftp-server
>> hasi      138373  0.0  0.0   2604  1716 ?        Ds   15:28   0:00 /usr/lib/ssh/sftp-server
>> hasi      138384  0.0  0.0   2604  1688 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      138391  0.0  0.0   2604  1596 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      138398  0.0  0.0   2604  1728 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      138406  0.0  0.0   2604  1716 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      138413  0.0  0.0   2604  1872 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      138423  0.0  0.0   2604  1916 ?        Ds   15:29   0:00 /usr/lib/ssh/sftp-server
>> hasi      138433  0.0  0.0   2604  1804 ?        Ds   15:30   0:00 /usr/lib/ssh/sftp-server
>> hasi      138455  0.0  0.0   2604  1804 ?        Ds   15:31   0:00 /usr/lib/ssh/sftp-server
>> hasi      138470  0.0  0.0   2604  2072 ?        Ds   15:31   0:00 /usr/lib/ssh/sftp-server
>> root      138974  0.0  0.0   3936  2080 pts/2    S+   16:32   0:00 grep         D
>> [root@coldnas ~]# uname -a
>> Linux coldnas 6.10.5-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 15 Aug 2024 00:25:30 +0000 x86_64 GNU/Linux
> 
> 
>>> Am 08.08.2024 um 16:23 schrieb John Stoffel <john@stoffel.org>:
>>> 
>>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>>> 
>>>> Hi,
>>>>> On 7. Aug 2024, at 23:05, John Stoffel <john@stoffel.org> wrote:
>>>>> 
>>>>>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
>>>>> 
>>>>> 
>>>>> 
>>>>>> I had some more time at hand and managed to compile 5.15.164. The
>>>>>> issue is the same. After around 1h30m of work it hangs.  I’ll try to
>>>>>> reproduce this on a newer supported kernel if I can.
>>>>> 
>>>>> Supported by who?   NixOS?  Why don't you just install Linux kernel
>>>>> 6.6.x and see if the problem is still there?  5.15.x is ancient and
>>>>> unsupported upstream now.
>>> 
>>>> I did just that. However, 5.15 “un-supported” by upstream is
>>>> confusing me. It’s an official LTS kernel with an EOL of December
>>>> 2026.
>>> 
>>> To quote the kernel.org:
>>> 
>>> Longterm
>>> 
>>> There are usually several "longterm maintenance" kernel releases
>>> provided for the purposes of backporting bugfixes for older kernel
>>> trees. Only important bugfixes are applied to such kernels and they
>>> don't usually see very frequent releases, especially for older trees.
>>> 
>>> So when we run into people having problems with LTS kernels, the first
>>> thing we ask is for them to run the most recent kernels, because
>>> that's where the bug fixing happens.  
>>> 
>>> In any case, there have been some bugs in recent RAID5/RAID6 setups,
>>> so going to a recent kernel will help track these down. 
>>> 
>>> 
>>>> Also, I’d like to note that NixOS kernels tend to be very close to
>>>> upstream. The only patches that I can see are involved here are
>>>> those that patch out some hard coded references to user space paths:
>>> 
>>> Not in this case; kernel 5.15 is ancient. You should be running 6.9.x
>>> or even newer for debugging issues like this.
>>> 
>>>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/linux-kernels.nix#L173
>>>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/request-key-helper.patch
>>>> https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel/bridge-stp-helper.patch
>>> 
>>>> Kernel is now:
>>> 
>>>> Linux barbrady08 6.10.3 #1-NixOS SMP PREEMPT_DYNAMIC Sat Aug  3 07:01:09 UTC 2024 x86_64 GNU/Linux
>>> 
>>> 
>>>> The issue is still there on 6.10.3 and now looks like shown below.
>>> 
>>> Great!  Thanks for making this change; it will let the developers
>>> help you more easily.
>>> 
>>>> I’m aware that this is output that shows symptoms and not
>>>> (necessarily) the cause. I’m currently a bit out of ideas where to
>>>> look for more information and would appreciate any pointers. My
>>>> suspicion is an interaction problem triggered by the use of NVMe in
>>>> combination with other systems (xfs, dm-crypt and raid are the ones
>>>> I’m aware of playing a role).
>>> 
>>>> The use of NVMe itself likely isn’t the issue (we’ve been using NVMe
>>>> on similar hosts and also in combination with dm-crypt with this
>>>> kernel for a while now) and I could imagine that it triggers a race
>>>> condition due to the higher performance - although the specific
>>>> performance parameters aren't *that* high. Right before the lockup I
>>>> see ~700 IOPS reading and ~2.5k IOPS writing. So we have seen NVMe
>>>> with dm-crypt but not with raid before.
>>> 
>>>> I can perform debugging on that machine as needed, but googling for
>>>> any combination of hung tasks related to nvme/xfs/crypt/raid only
>>>> ends up showing me generic performance concerns from forums, an
>>>> unrelated XFS issue mentioned by Red Hat, and the list archive entry
>>>> from this post.
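[Editor's note: one generic way to capture more than the hung-task symptoms — an assumption on my part, not something suggested in this thread — is sysrq-w, which dumps backtraces for every task in uninterruptible (D) sleep. The commands are collected in a variable and printed for review, since actually running them needs root:]

```shell
#!/bin/sh
# Generic data-gathering sketch (assumed procedure, not from this thread).
# The commands are printed for review first; executing them for real
# requires root and a sysrq interface that is enabled.
sysrq_cmds='sysctl -w kernel.sysrq=1      # allow all sysrq functions
echo w > /proc/sysrq-trigger  # dump backtraces of all D-state (blocked) tasks
dmesg | tail -n 300           # the backtraces land in the kernel ring buffer'
printf '%s\n' "$sysrq_cmds"
```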
>>> 
>>> Can you try setting up some loop devices in the same type of
>>> configuration, and seeing if you can replicate the issue that way?
>>> Let's try to get the nvme stuff out of the way to see if this can be
>>> replicated more easily.  
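[Editor's note: a loop-device reproduction along these lines might look as follows. All device names, sizes, paths, and the DRY_RUN wrapper are illustrative assumptions, not commands from this thread; with DRY_RUN=1 (the default) the script only prints what it would do, and everything needs root when executed for real:]

```shell
#!/bin/sh
# Sketch of the loop-device reproduction suggested above (assumptions
# throughout: device names, sizes, mount point). Review before running.
set -eu

DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Backing files and loop devices standing in for the NVMe drives.
for i in 0 1 2 3; do
    run truncate -s 2G "/var/tmp/raidtest$i.img"
    run losetup "/dev/loop1$i" "/var/tmp/raidtest$i.img"
done

# RAID-6 with an internal write-intent bitmap: the hung-task traces sit
# in md_bitmap_startwrite, so the bitmap should be part of the repro.
run mdadm --create /dev/md99 --level=6 --raid-devices=4 \
    --bitmap=internal /dev/loop10 /dev/loop11 /dev/loop12 /dev/loop13

# LUKS on the array, then XFS on the mapping, matching the reported stack.
run cryptsetup luksFormat /dev/md99
run cryptsetup open /dev/md99 cryptmd
run mkfs.xfs /dev/mapper/cryptmd
run mount /dev/mapper/cryptmd /mnt/repro

# Metadata-heavy writer approximating rsync over millions of small files.
run sh -c 'for n in $(seq 1 100000); do echo x > /mnt/repro/f$n; done'
```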
>>> 
>>> 
>>>> [ 7497.019235] INFO: task .backy-wrapped:2706 blocked for more than 122 seconds.
>>>> [ 7497.027265]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.032173] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.040974] task:.backy-wrapped  state:D stack:0     pid:2706  tgid:2706  ppid:1      flags:0x00000002
>>>> [ 7497.040979] Call Trace:
>>>> [ 7497.040981]  <TASK>
>>>> [ 7497.040987]  __schedule+0x3fa/0x1550
>>>> [ 7497.040996]  ? xfs_iextents_copy+0xec/0x1b0 [xfs]
>>>> [ 7497.041085]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.041089]  ? xlog_copy_iovec+0x30/0x90 [xfs]
>>>> [ 7497.041168]  schedule+0x27/0xf0
>>>> [ 7497.041171]  io_schedule+0x46/0x70
>>>> [ 7497.041173]  folio_wait_bit_common+0x13f/0x340
>>>> [ 7497.041180]  ? __pfx_wake_page_function+0x10/0x10
>>>> [ 7497.041187]  folio_wait_writeback+0x2b/0x80
>>>> [ 7497.041191]  truncate_inode_partial_folio+0x5b/0x190
>>>> [ 7497.041194]  truncate_inode_pages_range+0x1de/0x400
>>>> [ 7497.041207]  evict+0x1b0/0x1d0
>>>> [ 7497.041212]  __dentry_kill+0x6e/0x170
>>>> [ 7497.041216]  dput+0xe5/0x1b0
>>>> [ 7497.041218]  do_renameat2+0x386/0x600
>>>> [ 7497.041226]  __x64_sys_rename+0x43/0x50
>>>> [ 7497.041229]  do_syscall_64+0xb7/0x200
>>>> [ 7497.041234]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>>>> [ 7497.041236] RIP: 0033:0x7f4be586f75b
>>>> [ 7497.041265] RSP: 002b:00007fffd2706538 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>>> [ 7497.041267] RAX: ffffffffffffffda RBX: 00007fffd27065d0 RCX: 00007f4be586f75b
>>>> [ 7497.041269] RDX: 0000000000000000 RSI: 00007f4bd6f73e50 RDI: 00007f4bd6f732d0
>>>> [ 7497.041270] RBP: 00007fffd2706580 R08: 00000000ffffffff R09: 0000000000000000
>>>> [ 7497.041271] R10: 00007fffd27067b0 R11: 0000000000000246 R12: 00000000ffffff9c
>>>> [ 7497.041273] R13: 00000000ffffff9c R14: 0000000037fb4ab0 R15: 00007f4be5814810
>>>> [ 7497.041277]  </TASK>
>>>> [ 7497.041281] INFO: task kworker/u131:1:12780 blocked for more than 122 seconds.
>>>> [ 7497.049410]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.054317] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.063124] task:kworker/u131:1  state:D stack:0     pid:12780 tgid:12780 ppid:2      flags:0x00004000
>>>> [ 7497.063131] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>>> [ 7497.063140] Call Trace:
>>>> [ 7497.063141]  <TASK>
>>>> [ 7497.063145]  __schedule+0x3fa/0x1550
>>>> [ 7497.063154]  schedule+0x27/0xf0
>>>> [ 7497.063156]  md_bitmap_startwrite+0x14f/0x1c0
>>>> [ 7497.063160]  ? __pfx_autoremove_wake_function+0x10/0x10
>>>> [ 7497.063168]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>>> [ 7497.063175]  raid5_make_request+0x34d/0x1280 [raid456]
>>>> [ 7497.063182]  ? __pfx_woken_wake_function+0x10/0x10
>>>> [ 7497.063184]  ? bio_split_rw+0x193/0x260
>>>> [ 7497.063190]  md_handle_request+0x153/0x270
>>>> [ 7497.063194]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.063198]  __submit_bio+0x190/0x240
>>>> [ 7497.063203]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>>> [ 7497.063205]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.063207]  ? submit_bio_noacct+0x46/0x5a0
>>>> [ 7497.063210]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>>> [ 7497.063214]  process_one_work+0x18f/0x3b0
>>>> [ 7497.063219]  worker_thread+0x233/0x340
>>>> [ 7497.063222]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.063225]  kthread+0xcd/0x100
>>>> [ 7497.063228]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.063230]  ret_from_fork+0x31/0x50
>>>> [ 7497.063234]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.063236]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.063243]  </TASK>
>>>> [ 7497.063246] INFO: task kworker/u131:0:17487 blocked for more than 122 seconds.
>>>> [ 7497.071367]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.076269] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.085073] task:kworker/u131:0  state:D stack:0     pid:17487 tgid:17487 ppid:2      flags:0x00004000
>>>> [ 7497.085081] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>>> [ 7497.085086] Call Trace:
>>>> [ 7497.085087]  <TASK>
>>>> [ 7497.085089]  __schedule+0x3fa/0x1550
>>>> [ 7497.085094]  schedule+0x27/0xf0
>>>> [ 7497.085096]  md_bitmap_startwrite+0x14f/0x1c0
>>>> [ 7497.085098]  ? __pfx_autoremove_wake_function+0x10/0x10
>>>> [ 7497.085102]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>>> [ 7497.085108]  raid5_make_request+0x34d/0x1280 [raid456]
>>>> [ 7497.085114]  ? __pfx_woken_wake_function+0x10/0x10
>>>> [ 7497.085116]  ? bio_split_rw+0x193/0x260
>>>> [ 7497.085120]  md_handle_request+0x153/0x270
>>>> [ 7497.085122]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.085125]  __submit_bio+0x190/0x240
>>>> [ 7497.085128]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>>> [ 7497.085131]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.085133]  ? submit_bio_noacct+0x46/0x5a0
>>>> [ 7497.085135]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>>> [ 7497.085138]  process_one_work+0x18f/0x3b0
>>>> [ 7497.085142]  worker_thread+0x233/0x340
>>>> [ 7497.085145]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.085148]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.085150]  kthread+0xcd/0x100
>>>> [ 7497.085152]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.085155]  ret_from_fork+0x31/0x50
>>>> [ 7497.085157]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.085159]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.085164]  </TASK>
>>>> [ 7497.085165] INFO: task kworker/u131:2:18973 blocked for more than 122 seconds.
>>>> [ 7497.093282]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.098185] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.106988] task:kworker/u131:2  state:D stack:0     pid:18973 tgid:18973 ppid:2      flags:0x00004000
>>>> [ 7497.106993] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>>> [ 7497.106998] Call Trace:
>>>> [ 7497.106999]  <TASK>
>>>> [ 7497.107001]  __schedule+0x3fa/0x1550
>>>> [ 7497.107006]  schedule+0x27/0xf0
>>>> [ 7497.107009]  md_bitmap_startwrite+0x14f/0x1c0
>>>> [ 7497.107012]  ? __pfx_autoremove_wake_function+0x10/0x10
>>>> [ 7497.107016]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>>> [ 7497.107021]  raid5_make_request+0x34d/0x1280 [raid456]
>>>> [ 7497.107026]  ? __pfx_woken_wake_function+0x10/0x10
>>>> [ 7497.107028]  ? bio_split_rw+0x193/0x260
>>>> [ 7497.107033]  md_handle_request+0x153/0x270
>>>> [ 7497.107036]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.107039]  __submit_bio+0x190/0x240
>>>> [ 7497.107042]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>>> [ 7497.107044]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.107046]  ? submit_bio_noacct+0x46/0x5a0
>>>> [ 7497.107049]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>>> [ 7497.107052]  process_one_work+0x18f/0x3b0
>>>> [ 7497.107055]  worker_thread+0x233/0x340
>>>> [ 7497.107058]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.107060]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.107063]  kthread+0xcd/0x100
>>>> [ 7497.107065]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.107067]  ret_from_fork+0x31/0x50
>>>> [ 7497.107069]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.107071]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.107081]  </TASK>
>>>> [ 7497.107086] INFO: task rsync:23530 blocked for more than 122 seconds.
>>>> [ 7497.114327]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.119226] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.128020] task:rsync           state:D stack:0     pid:23530 tgid:23530 ppid:23520  flags:0x00000000
>>>> [ 7497.128024] Call Trace:
>>>> [ 7497.128025]  <TASK>
>>>> [ 7497.128027]  __schedule+0x3fa/0x1550
>>>> [ 7497.128030]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.128034]  schedule+0x27/0xf0
>>>> [ 7497.128036]  schedule_timeout+0x15d/0x170
>>>> [ 7497.128040]  __down_common+0x119/0x220
>>>> [ 7497.128045]  down+0x47/0x60
>>>> [ 7497.128048]  xfs_buf_lock+0x31/0xe0 [xfs]
>>>> [ 7497.128131]  xfs_buf_find_lock+0x55/0x100 [xfs]
>>>> [ 7497.128185]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
>>>> [ 7497.128236]  xfs_buf_read_map+0x62/0x2a0 [xfs]
>>>> [ 7497.128287]  ? xfs_read_agf+0x97/0x150 [xfs]
>>>> [ 7497.128357]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
>>>> [ 7497.128429]  ? xfs_read_agf+0x97/0x150 [xfs]
>>>> [ 7497.128489]  xfs_read_agf+0x97/0x150 [xfs]
>>>> [ 7497.128540]  xfs_alloc_read_agf+0x5a/0x200 [xfs]
>>>> [ 7497.128589]  xfs_alloc_fix_freelist+0x345/0x660 [xfs]
>>>> [ 7497.128641]  xfs_alloc_vextent_prepare_ag+0x2d/0x120 [xfs]
>>>> [ 7497.128690]  xfs_alloc_vextent_exact_bno+0xd1/0x100 [xfs]
>>>> [ 7497.128740]  xfs_ialloc_ag_alloc+0x177/0x610 [xfs]
>>>> [ 7497.128812]  xfs_dialloc+0x219/0x7b0 [xfs]
>>>> [ 7497.128864]  ? xfs_trans_alloc_icreate+0x93/0x120 [xfs]
>>>> [ 7497.128935]  xfs_create+0x2c7/0x640 [xfs]
>>>> [ 7497.128998]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.129001]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.129003]  ? get_cached_acl+0x4c/0x90
>>>> [ 7497.129008]  xfs_generic_create+0x321/0x3a0 [xfs]
>>>> [ 7497.129061]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.129065]  path_openat+0xf82/0x1240
>>>> [ 7497.129072]  do_filp_open+0xc4/0x170
>>>> [ 7497.129084]  do_sys_openat2+0xab/0xe0
>>>> [ 7497.129090]  __x64_sys_openat+0x57/0xa0
>>>> [ 7497.129093]  do_syscall_64+0xb7/0x200
>>>> [ 7497.129096]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>>>> [ 7497.129099] RIP: 0033:0x7f6809d2be2f
>>>> [ 7497.129121] RSP: 002b:00007ffe3d410cf0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
>>>> [ 7497.129123] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6809d2be2f
>>>> [ 7497.129124] RDX: 00000000000000c2 RSI: 00007ffe3d412fc0 RDI: 00000000ffffff9c
>>>> [ 7497.129126] RBP: 000000000003a2f8 R08: 001f1108db8eff56 R09: 00007ffe3d410f2c
>>>> [ 7497.129128] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffe3d41300b
>>>> [ 7497.129129] R13: 00007ffe3d412fc0 R14: 8421084210842109 R15: 00007f6809dc6a80
>>>> [ 7497.129133]  </TASK>
>>>> [ 7497.129146] INFO: task kworker/u131:3:23611 blocked for more than 122 seconds.
>>>> [ 7497.137277]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.142187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.150980] task:kworker/u131:3  state:D stack:0     pid:23611 tgid:23611 ppid:2      flags:0x00004000
>>>> [ 7497.150986] Workqueue: writeback wb_workfn (flush-253:4)
>>>> [ 7497.150993] Call Trace:
>>>> [ 7497.150995]  <TASK>
>>>> [ 7497.150998]  __schedule+0x3fa/0x1550
>>>> [ 7497.151007]  schedule+0x27/0xf0
>>>> [ 7497.151009]  schedule_timeout+0x15d/0x170
>>>> [ 7497.151013]  __wait_for_common+0x90/0x1c0
>>>> [ 7497.151015]  ? __pfx_schedule_timeout+0x10/0x10
>>>> [ 7497.151020]  xfs_buf_iowait+0x1c/0xc0 [xfs]
>>>> [ 7497.151094]  __xfs_buf_submit+0x132/0x1e0 [xfs]
>>>> [ 7497.151146]  xfs_buf_read_map+0x129/0x2a0 [xfs]
>>>> [ 7497.151197]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>>>> [ 7497.151267]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
>>>> [ 7497.151336]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>>>> [ 7497.151396]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
>>>> [ 7497.151446]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
>>>> [ 7497.151497]  xfs_btree_lookup+0xea/0x500 [xfs]
>>>> [ 7497.151546]  ? xfs_btree_increment+0x44/0x310 [xfs]
>>>> [ 7497.151596]  xfs_alloc_fixup_trees+0x66/0x4c0 [xfs]
>>>> [ 7497.151661]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>>>> [ 7497.151710]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
>>>> [ 7497.151764]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
>>>> [ 7497.151813]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.151817]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
>>>> [ 7497.151884]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
>>>> [ 7497.151938]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
>>>> [ 7497.151999]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
>>>> [ 7497.152048]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
>>>> [ 7497.152105]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
>>>> [ 7497.152155]  xfs_map_blocks+0x257/0x420 [xfs]
>>>> [ 7497.152228]  iomap_writepages+0x271/0x9b0
>>>> [ 7497.152235]  xfs_vm_writepages+0x67/0x90 [xfs]
>>>> [ 7497.152287]  do_writepages+0x76/0x260
>>>> [ 7497.152294]  ? uas_submit_urbs+0x8c/0x4c0 [uas]
>>>> [ 7497.152297]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.152300]  ? psi_group_change+0x213/0x3c0
>>>> [ 7497.152305]  __writeback_single_inode+0x3d/0x350
>>>> [ 7497.152307]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.152309]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.152312]  writeback_sb_inodes+0x21c/0x4e0
>>>> [ 7497.152323]  __writeback_inodes_wb+0x4c/0xf0
>>>> [ 7497.152325]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.152328]  wb_writeback+0x193/0x310
>>>> [ 7497.152332]  wb_workfn+0x357/0x450
>>>> [ 7497.152337]  process_one_work+0x18f/0x3b0
>>>> [ 7497.152342]  worker_thread+0x233/0x340
>>>> [ 7497.152345]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.152348]  kthread+0xcd/0x100
>>>> [ 7497.152352]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.152354]  ret_from_fork+0x31/0x50
>>>> [ 7497.152358]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.152360]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.152366]  </TASK>
>>>> [ 7497.152368] INFO: task kworker/u131:4:23612 blocked for more than 123 seconds.
>>>> [ 7497.160489]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.165390] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.174190] task:kworker/u131:4  state:D stack:0     pid:23612 tgid:23612 ppid:2      flags:0x00004000
>>>> [ 7497.174194] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>>> [ 7497.174200] Call Trace:
>>>> [ 7497.174201]  <TASK>
>>>> [ 7497.174203]  __schedule+0x3fa/0x1550
>>>> [ 7497.174208]  schedule+0x27/0xf0
>>>> [ 7497.174210]  md_bitmap_startwrite+0x14f/0x1c0
>>>> [ 7497.174214]  ? __pfx_autoremove_wake_function+0x10/0x10
>>>> [ 7497.174219]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>>> [ 7497.174227]  raid5_make_request+0x34d/0x1280 [raid456]
>>>> [ 7497.174233]  ? __pfx_woken_wake_function+0x10/0x10
>>>> [ 7497.174235]  ? bio_split_rw+0x193/0x260
>>>> [ 7497.174242]  md_handle_request+0x153/0x270
>>>> [ 7497.174245]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.174248]  __submit_bio+0x190/0x240
>>>> [ 7497.174252]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>>> [ 7497.174255]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.174257]  ? submit_bio_noacct+0x46/0x5a0
>>>> [ 7497.174259]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>>> [ 7497.174263]  process_one_work+0x18f/0x3b0
>>>> [ 7497.174266]  worker_thread+0x233/0x340
>>>> [ 7497.174269]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.174271]  kthread+0xcd/0x100
>>>> [ 7497.174273]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.174276]  ret_from_fork+0x31/0x50
>>>> [ 7497.174277]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.174279]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.174285]  </TASK>
>>>> [ 7497.174292] INFO: task kworker/u130:33:23645 blocked for more than 123 seconds.
>>>> [ 7497.182499]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.187400] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.196203] task:kworker/u130:33 state:D stack:0     pid:23645 tgid:23645 ppid:2      flags:0x00004000
>>>> [ 7497.196209] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
>>>> [ 7497.196281] Call Trace:
>>>> [ 7497.196282]  <TASK>
>>>> [ 7497.196285]  __schedule+0x3fa/0x1550
>>>> [ 7497.196289]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.196293]  schedule+0x27/0xf0
>>>> [ 7497.196295]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
>>>> [ 7497.196346]  ? __pfx_default_wake_function+0x10/0x10
>>>> [ 7497.196351]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
>>>> [ 7497.196400]  xlog_write+0x310/0x470 [xfs]
>>>> [ 7497.196451]  xlog_cil_push_work+0x6a5/0x880 [xfs]
>>>> [ 7497.196503]  process_one_work+0x18f/0x3b0
>>>> [ 7497.196507]  worker_thread+0x233/0x340
>>>> [ 7497.196510]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.196512]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.196515]  kthread+0xcd/0x100
>>>> [ 7497.196517]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.196519]  ret_from_fork+0x31/0x50
>>>> [ 7497.196522]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.196524]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.196529]  </TASK>
>>>> [ 7497.196531] INFO: task kworker/u131:6:23863 blocked for more than 123 seconds.
>>>> [ 7497.204648]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.209539] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.218347] task:kworker/u131:6  state:D stack:0     pid:23863 tgid:23863 ppid:2      flags:0x00004000
>>>> [ 7497.218353] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>>> [ 7497.218359] Call Trace:
>>>> [ 7497.218360]  <TASK>
>>>> [ 7497.218363]  __schedule+0x3fa/0x1550
>>>> [ 7497.218369]  schedule+0x27/0xf0
>>>> [ 7497.218371]  md_bitmap_startwrite+0x14f/0x1c0
>>>> [ 7497.218375]  ? __pfx_autoremove_wake_function+0x10/0x10
>>>> [ 7497.218379]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>>> [ 7497.218384]  raid5_make_request+0x34d/0x1280 [raid456]
>>>> [ 7497.218390]  ? __pfx_woken_wake_function+0x10/0x10
>>>> [ 7497.218392]  ? bio_split_rw+0x193/0x260
>>>> [ 7497.218398]  md_handle_request+0x153/0x270
>>>> [ 7497.218401]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.218405]  __submit_bio+0x190/0x240
>>>> [ 7497.218408]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>>> [ 7497.218410]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.218413]  ? submit_bio_noacct+0x46/0x5a0
>>>> [ 7497.218415]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>>> [ 7497.218419]  process_one_work+0x18f/0x3b0
>>>> [ 7497.218423]  worker_thread+0x233/0x340
>>>> [ 7497.218426]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.218428]  kthread+0xcd/0x100
>>>> [ 7497.218430]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.218433]  ret_from_fork+0x31/0x50
>>>> [ 7497.218435]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.218437]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.218442]  </TASK>
>>>> [ 7497.218444] INFO: task kworker/u131:7:23864 blocked for more than 123 seconds.
>>>> [ 7497.226572]       Not tainted 6.10.3 #1-NixOS
>>>> [ 7497.231475] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>> [ 7497.240277] task:kworker/u131:7  state:D stack:0     pid:23864 tgid:23864 ppid:2      flags:0x00004000
>>>> [ 7497.240282] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
>>>> [ 7497.240287] Call Trace:
>>>> [ 7497.240288]  <TASK>
>>>> [ 7497.240290]  __schedule+0x3fa/0x1550
>>>> [ 7497.240298]  schedule+0x27/0xf0
>>>> [ 7497.240301]  md_bitmap_startwrite+0x14f/0x1c0
>>>> [ 7497.240304]  ? __pfx_autoremove_wake_function+0x10/0x10
>>>> [ 7497.240310]  __add_stripe_bio+0x1f4/0x240 [raid456]
>>>> [ 7497.240314]  raid5_make_request+0x34d/0x1280 [raid456]
>>>> [ 7497.240320]  ? __pfx_woken_wake_function+0x10/0x10
>>>> [ 7497.240322]  ? bio_split_rw+0x193/0x260
>>>> [ 7497.240328]  md_handle_request+0x153/0x270
>>>> [ 7497.240330]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.240334]  __submit_bio+0x190/0x240
>>>> [ 7497.240338]  submit_bio_noacct_nocheck+0x19a/0x3c0
>>>> [ 7497.240340]  ? srso_alias_return_thunk+0x5/0xfbef5
>>>> [ 7497.240342]  ? submit_bio_noacct+0x46/0x5a0
>>>> [ 7497.240345]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
>>>> [ 7497.240348]  process_one_work+0x18f/0x3b0
>>>> [ 7497.240353]  worker_thread+0x233/0x340
>>>> [ 7497.240356]  ? __pfx_worker_thread+0x10/0x10
>>>> [ 7497.240358]  kthread+0xcd/0x100
>>>> [ 7497.240361]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.240364]  ret_from_fork+0x31/0x50
>>>> [ 7497.240366]  ? __pfx_kthread+0x10/0x10
>>>> [ 7497.240368]  ret_from_fork_asm+0x1a/0x30
>>>> [ 7497.240375]  </TASK>
>>>> [ 7497.240376] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
>>> 
>>>>> 
>>>>>> Kernel:
>>>>> 
>>>>>> Linux version 5.15.164 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Sat Jul 27 08:46:18 UTC 2024
>>>>> 
>>>>>> The config is unchanged except for the deprecated NFSD_V2_ACL and NFSD_V3 options, which I had to remove. NFS is not in use on this server, though.
>>>>> 
>>>>>> Output:
>>>>> 
>>>>>> [ 4549.838672] INFO: task kworker/u64:7:432 blocked for more than 122 seconds.
>>>>>> [ 4549.846507]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4549.851616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4549.860421] task:kworker/u64:7   state:D stack:    0 pid:  432 ppid:     2 flags:0x00004000
>>>>>> [ 4549.860426] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4549.860435] Call Trace:
>>>>>> [ 4549.860437]  <TASK>
>>>>>> [ 4549.860440]  __schedule+0x373/0x1580
>>>>>> [ 4549.860446]  ? sysvec_call_function_single+0xa/0x90
>>>>>> [ 4549.860449]  ? asm_sysvec_call_function_single+0x16/0x20
>>>>>> [ 4549.860453]  schedule+0x5b/0xe0
>>>>>> [ 4549.860455]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4549.860459]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.860465]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4549.860472]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4549.860476]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>>>>> [ 4549.860480]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.860484]  ? linear_map+0x44/0x90 [dm_mod]
>>>>>> [ 4549.860490]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.860492]  ? __blk_queue_split+0x516/0x580
>>>>>> [ 4549.860495]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4549.860500]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4549.860502]  __submit_bio+0x18c/0x220
>>>>>> [ 4549.860505]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.860507]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4549.860510]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4549.860512]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4549.860517]  process_one_work+0x1d3/0x360
>>>>>> [ 4549.860521]  worker_thread+0x4d/0x3b0
>>>>>> [ 4549.860523]  ? process_one_work+0x360/0x360
>>>>>> [ 4549.860525]  kthread+0x115/0x140
>>>>>> [ 4549.860528]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4549.860530]  ret_from_fork+0x1f/0x30
>>>>>> [ 4549.860535]  </TASK>
>>>>>> [ 4549.860536] INFO: task kworker/u64:23:448 blocked for more than 122 seconds.
>>>>>> [ 4549.868461]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4549.873555] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4549.882358] task:kworker/u64:23  state:D stack:    0 pid:  448 ppid:     2 flags:0x00004000
>>>>>> [ 4549.882364] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4549.882368] Call Trace:
>>>>>> [ 4549.882369]  <TASK>
>>>>>> [ 4549.882370]  __schedule+0x373/0x1580
>>>>>> [ 4549.882373]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>>> [ 4549.882375]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>>> [ 4549.882379]  schedule+0x5b/0xe0
>>>>>> [ 4549.882382]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4549.882384]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.882387]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4549.882393]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4549.882397]  ? __bio_clone_fast+0xa5/0xe0
>>>>>> [ 4549.882401]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.882403]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.882406]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4549.882410]  ? blk_throtl_charge_bio_split+0x23/0x60
>>>>>> [ 4549.882413]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4549.882415]  __submit_bio+0x18c/0x220
>>>>>> [ 4549.882417]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.882419]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4549.882421]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4549.882424]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4549.882428]  process_one_work+0x1d3/0x360
>>>>>> [ 4549.882431]  worker_thread+0x4d/0x3b0
>>>>>> [ 4549.882433]  ? process_one_work+0x360/0x360
>>>>>> [ 4549.882435]  kthread+0x115/0x140
>>>>>> [ 4549.882436]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4549.882438]  ret_from_fork+0x1f/0x30
>>>>>> [ 4549.882442]  </TASK>
>>>>>> [ 4549.882497] INFO: task .backy-wrapped:2578 blocked for more than 122 seconds.
>>>>>> [ 4549.890517]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4549.895611] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4549.904406] task:.backy-wrapped  state:D stack:    0 pid: 2578 ppid:     1 flags:0x00000002
>>>>>> [ 4549.904411] Call Trace:
>>>>>> [ 4549.904412]  <TASK>
>>>>>> [ 4549.904414]  __schedule+0x373/0x1580
>>>>>> [ 4549.904419]  ? xlog_cil_commit+0x556/0x880 [xfs]
>>>>>> [ 4549.904465]  ? __xfs_trans_commit+0xac/0x2f0 [xfs]
>>>>>> [ 4549.904498]  schedule+0x5b/0xe0
>>>>>> [ 4549.904500]  io_schedule+0x42/0x70
>>>>>> [ 4549.904503]  wait_on_page_bit_common+0x119/0x380
>>>>>> [ 4549.904507]  ? __page_cache_alloc+0x80/0x80
>>>>>> [ 4549.904510]  wait_on_page_writeback+0x22/0x70
>>>>>> [ 4549.904513]  truncate_inode_pages_range+0x26f/0x6d0
>>>>>> [ 4549.904520]  evict+0x15f/0x180
>>>>>> [ 4549.904524]  __dentry_kill+0xde/0x170
>>>>>> [ 4549.904527]  dput+0x139/0x320
>>>>>> [ 4549.904529]  do_renameat2+0x375/0x5f0
>>>>>> [ 4549.904536]  __x64_sys_rename+0x3f/0x50
>>>>>> [ 4549.904538]  do_syscall_64+0x34/0x80
>>>>>> [ 4549.904541]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>>>>> [ 4549.904544] RIP: 0033:0x7fbf3e61a75b
>>>>>> [ 4549.904545] RSP: 002b:00007ffc61e25988 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>>>>> [ 4549.904548] RAX: ffffffffffffffda RBX: 00007ffc61e25a20 RCX: 00007fbf3e61a75b
>>>>>> [ 4549.904549] RDX: 0000000000000000 RSI: 00007fbf2f7ff150 RDI: 00007fbf2f7fc190
>>>>>> [ 4549.904550] RBP: 00007ffc61e259d0 R08: 00000000ffffffff R09: 0000000000000000
>>>>>> [ 4549.904551] R10: 00007ffc61e25c00 R11: 0000000000000246 R12: 00000000ffffff9c
>>>>>> [ 4549.904552] R13: 00000000ffffff9c R14: 00000000016afab0 R15: 00007fbf30ef0810
>>>>>> [ 4549.904555]  </TASK>
>>>>>> [ 4549.904556] INFO: task kworker/u64:0:4372 blocked for more than 122 seconds.
>>>>>> [ 4549.912477]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4549.917573] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4549.926373] task:kworker/u64:0   state:D stack:    0 pid: 4372 ppid:     2 flags:0x00004000
>>>>>> [ 4549.926376] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4549.926380] Call Trace:
>>>>>> [ 4549.926381]  <TASK>
>>>>>> [ 4549.926383]  __schedule+0x373/0x1580
>>>>>> [ 4549.926386]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>>> [ 4549.926389]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>>> [ 4549.926392]  schedule+0x5b/0xe0
>>>>>> [ 4549.926394]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4549.926397]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.926401]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4549.926406]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4549.926410]  ? __bio_clone_fast+0xa5/0xe0
>>>>>> [ 4549.926413]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.926415]  ? __blk_queue_split+0x2d0/0x580
>>>>>> [ 4549.926418]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4549.926422]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4549.926424]  __submit_bio+0x18c/0x220
>>>>>> [ 4549.926426]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.926428]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4549.926431]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4549.926434]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4549.926437]  process_one_work+0x1d3/0x360
>>>>>> [ 4549.926441]  worker_thread+0x4d/0x3b0
>>>>>> [ 4549.926442]  ? process_one_work+0x360/0x360
>>>>>> [ 4549.926444]  kthread+0x115/0x140
>>>>>> [ 4549.926447]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4549.926448]  ret_from_fork+0x1f/0x30
>>>>>> [ 4549.926454]  </TASK>
>>>>>> [ 4549.926459] INFO: task rsync:4929 blocked for more than 122 seconds.
>>>>>> [ 4549.933603]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4549.938702] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4549.947501] task:rsync           state:D stack:    0 pid: 4929 ppid:  4925 flags:0x00000000
>>>>>> [ 4549.947503] Call Trace:
>>>>>> [ 4549.947505]  <TASK>
>>>>>> [ 4549.947505]  ? usleep_range_state+0x90/0x90
>>>>>> [ 4549.947510]  __schedule+0x373/0x1580
>>>>>> [ 4549.947513]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.947515]  ? blk_mq_sched_insert_requests+0x97/0xe0
>>>>>> [ 4549.947519]  ? usleep_range_state+0x90/0x90
>>>>>> [ 4549.947521]  schedule+0x5b/0xe0
>>>>>> [ 4549.947523]  schedule_timeout+0xff/0x130
>>>>>> [ 4549.947526]  __wait_for_common+0xaf/0x160
>>>>>> [ 4549.947530]  xfs_buf_iowait+0x1c/0xa0 [xfs]
>>>>>> [ 4549.947573]  __xfs_buf_submit+0x109/0x1b0 [xfs]
>>>>>> [ 4549.947604]  xfs_buf_read_map+0x120/0x280 [xfs]
>>>>>> [ 4549.947635]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>>>> [ 4549.947670]  xfs_trans_read_buf_map+0x156/0x2c0 [xfs]
>>>>>> [ 4549.947705]  ? xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>>>> [ 4549.947735]  xfs_btree_read_buf_block.constprop.0+0xae/0xf0 [xfs]
>>>>>> [ 4549.947764]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.947766]  xfs_btree_lookup_get_block+0xa2/0x180 [xfs]
>>>>>> [ 4549.947798]  xfs_btree_lookup+0xe9/0x540 [xfs]
>>>>>> [ 4549.947830]  xfs_alloc_lookup_eq+0x1d/0x30 [xfs]
>>>>>> [ 4549.947863]  xfs_alloc_fixup_trees+0xe7/0x3b0 [xfs]
>>>>>> [ 4549.947893]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
>>>>>> [ 4549.947923]  xfs_alloc_ag_vextent_near.constprop.0+0x3f2/0x4a0 [xfs]
>>>>>> [ 4549.947954]  xfs_alloc_ag_vextent+0x13f/0x150 [xfs]
>>>>>> [ 4549.947983]  xfs_alloc_vextent+0x327/0x450 [xfs]
>>>>>> [ 4549.948013]  xfs_bmap_btalloc+0x44e/0x830 [xfs]
>>>>>> [ 4549.948047]  xfs_bmapi_allocate+0xda/0x300 [xfs]
>>>>>> [ 4549.948076]  xfs_bmapi_write+0x4ab/0x570 [xfs]
>>>>>> [ 4549.948109]  xfs_da_grow_inode_int+0xd8/0x320 [xfs]
>>>>>> [ 4549.948141]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.948142]  ? xfs_da_read_buf+0xf7/0x150 [xfs]
>>>>>> [ 4549.948171]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.948174]  xfs_dir2_grow_inode+0x68/0x120 [xfs]
>>>>>> [ 4549.948204]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.948206]  xfs_dir2_node_addname+0x5ea/0x9e0 [xfs]
>>>>>> [ 4549.948241]  xfs_dir_createname+0x1cf/0x1e0 [xfs]
>>>>>> [ 4549.948271]  xfs_rename+0x87e/0xcd0 [xfs]
>>>>>> [ 4549.948308]  xfs_vn_rename+0xfa/0x170 [xfs]
>>>>>> [ 4549.948340]  vfs_rename+0x818/0x10d0
>>>>>> [ 4549.948345]  ? lookup_dcache+0x17/0x60
>>>>>> [ 4549.948348]  ? do_renameat2+0x57f/0x5f0
>>>>>> [ 4549.948350]  do_renameat2+0x57f/0x5f0
>>>>>> [ 4549.948355]  __x64_sys_rename+0x3f/0x50
>>>>>> [ 4549.948357]  do_syscall_64+0x34/0x80
>>>>>> [ 4549.948360]  entry_SYSCALL_64_after_hwframe+0x6c/0xd6
>>>>>> [ 4549.948362] RIP: 0033:0x7fcc5520c1d7
>>>>>> [ 4549.948364] RSP: 002b:00007ffe3909c748 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
>>>>>> [ 4549.948366] RAX: ffffffffffffffda RBX: 00007ffe3909c8f0 RCX: 00007fcc5520c1d7
>>>>>> [ 4549.948367] RDX: 0000000000000000 RSI: 00007ffe3909c8f0 RDI: 00007ffe3909e8f0
>>>>>> [ 4549.948368] RBP: 00007ffe3909e8f0 R08: 0000000000000000 R09: 00007ffe3909c2f8
>>>>>> [ 4549.948369] R10: 00007ffe3909c2f7 R11: 0000000000000246 R12: 0000000000000000
>>>>>> [ 4549.948370] R13: 00000000023c9c30 R14: 00000000000081a4 R15: 0000000000000004
>>>>>> [ 4549.948373]  </TASK>
>>>>>> [ 4549.948374] INFO: task kworker/u64:1:4930 blocked for more than 122 seconds.
>>>>>> [ 4549.956299]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4549.961396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4549.970198] task:kworker/u64:1   state:D stack:    0 pid: 4930 ppid:     2 flags:0x00004000
>>>>>> [ 4549.970202] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4549.970205] Call Trace:
>>>>>> [ 4549.970206]  <TASK>
>>>>>> [ 4549.970209]  __schedule+0x373/0x1580
>>>>>> [ 4549.970211]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.970215]  schedule+0x5b/0xe0
>>>>>> [ 4549.970217]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4549.970219]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.970223]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4549.970229]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4549.970232]  ? kmem_cache_alloc_node_trace+0x341/0x3e0
>>>>>> [ 4549.970236]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.970238]  ? linear_map+0x44/0x90 [dm_mod]
>>>>>> [ 4549.970244]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.970245]  ? __blk_queue_split+0x516/0x580
>>>>>> [ 4549.970248]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4549.970251]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4549.970254]  __submit_bio+0x18c/0x220
>>>>>> [ 4549.970256]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.970258]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4549.970260]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4549.970263]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4549.970267]  process_one_work+0x1d3/0x360
>>>>>> [ 4549.970270]  worker_thread+0x4d/0x3b0
>>>>>> [ 4549.970272]  ? process_one_work+0x360/0x360
>>>>>> [ 4549.970274]  kthread+0x115/0x140
>>>>>> [ 4549.970276]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4549.970278]  ret_from_fork+0x1f/0x30
>>>>>> [ 4549.970282]  </TASK>
>>>>>> [ 4549.970284] INFO: task kworker/u64:2:4949 blocked for more than 123 seconds.
>>>>>> [ 4549.978205]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4549.983290] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4549.992088] task:kworker/u64:2   state:D stack:    0 pid: 4949 ppid:     2 flags:0x00004000
>>>>>> [ 4549.992093] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4549.992097] Call Trace:
>>>>>> [ 4549.992098]  <TASK>
>>>>>> [ 4549.992100]  __schedule+0x373/0x1580
>>>>>> [ 4549.992103]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>>> [ 4549.992106]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>>> [ 4549.992109]  schedule+0x5b/0xe0
>>>>>> [ 4549.992111]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4549.992114]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.992117]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4549.992122]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4549.992125]  ? kmem_cache_alloc+0x261/0x3b0
>>>>>> [ 4549.992129]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.992131]  ? linear_map+0x44/0x90 [dm_mod]
>>>>>> [ 4549.992135]  ? finish_wait+0x90/0x90
>>>>>> [ 4549.992137]  ? __blk_queue_split+0x516/0x580
>>>>>> [ 4549.992139]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4549.992142]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4549.992144]  __submit_bio+0x18c/0x220
>>>>>> [ 4549.992146]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4549.992148]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4549.992150]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4549.992153]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4549.992157]  process_one_work+0x1d3/0x360
>>>>>> [ 4549.992160]  worker_thread+0x4d/0x3b0
>>>>>> [ 4549.992162]  ? process_one_work+0x360/0x360
>>>>>> [ 4549.992163]  kthread+0x115/0x140
>>>>>> [ 4549.992166]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4549.992168]  ret_from_fork+0x1f/0x30
>>>>>> [ 4549.992172]  </TASK>
>>>>>> [ 4549.992174] INFO: task kworker/u64:5:4952 blocked for more than 123 seconds.
>>>>>> [ 4550.000095]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4550.005187] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4550.013985] task:kworker/u64:5   state:D stack:    0 pid: 4952 ppid:     2 flags:0x00004000
>>>>>> [ 4550.013988] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4550.013992] Call Trace:
>>>>>> [ 4550.013993]  <TASK>
>>>>>> [ 4550.013995]  __schedule+0x373/0x1580
>>>>>> [ 4550.013997]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>>> [ 4550.014000]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>>> [ 4550.014003]  schedule+0x5b/0xe0
>>>>>> [ 4550.014005]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4550.014008]  ? finish_wait+0x90/0x90
>>>>>> [ 4550.014010]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4550.014015]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4550.014018]  ? __bio_clone_fast+0xa5/0xe0
>>>>>> [ 4550.014022]  ? finish_wait+0x90/0x90
>>>>>> [ 4550.014024]  ? __blk_queue_split+0x2d0/0x580
>>>>>> [ 4550.014027]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4550.014030]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4550.014032]  __submit_bio+0x18c/0x220
>>>>>> [ 4550.014034]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4550.014036]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4550.014038]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4550.014041]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4550.014044]  process_one_work+0x1d3/0x360
>>>>>> [ 4550.014047]  worker_thread+0x4d/0x3b0
>>>>>> [ 4550.014049]  ? process_one_work+0x360/0x360
>>>>>> [ 4550.014050]  kthread+0x115/0x140
>>>>>> [ 4550.014052]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4550.014054]  ret_from_fork+0x1f/0x30
>>>>>> [ 4550.014058]  </TASK>
>>>>>> [ 4550.014059] INFO: task kworker/u64:8:4954 blocked for more than 123 seconds.
>>>>>> [ 4550.021982]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4550.027078] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4550.035881] task:kworker/u64:8   state:D stack:    0 pid: 4954 ppid:     2 flags:0x00004000
>>>>>> [ 4550.035884] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4550.035887] Call Trace:
>>>>>> [ 4550.035888]  <TASK>
>>>>>> [ 4550.035890]  __schedule+0x373/0x1580
>>>>>> [ 4550.035893]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>>> [ 4550.035896]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>>> [ 4550.035899]  schedule+0x5b/0xe0
>>>>>> [ 4550.035901]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4550.035904]  ? finish_wait+0x90/0x90
>>>>>> [ 4550.035907]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4550.035912]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4550.035916]  ? __bio_clone_fast+0xa5/0xe0
>>>>>> [ 4550.035919]  ? finish_wait+0x90/0x90
>>>>>> [ 4550.035921]  ? __blk_queue_split+0x2d0/0x580
>>>>>> [ 4550.035924]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4550.035927]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4550.035929]  __submit_bio+0x18c/0x220
>>>>>> [ 4550.035931]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4550.035933]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4550.035936]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4550.035939]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4550.035942]  process_one_work+0x1d3/0x360
>>>>>> [ 4550.035946]  worker_thread+0x4d/0x3b0
>>>>>> [ 4550.035948]  ? process_one_work+0x360/0x360
>>>>>> [ 4550.035949]  kthread+0x115/0x140
>>>>>> [ 4550.035951]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4550.035953]  ret_from_fork+0x1f/0x30
>>>>>> [ 4550.035957]  </TASK>
>>>>>> [ 4550.035958] INFO: task kworker/u64:9:4955 blocked for more than 123 seconds.
>>>>>> [ 4550.043881]       Not tainted 5.15.164 #1-NixOS
>>>>>> [ 4550.048979] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> [ 4550.057786] task:kworker/u64:9   state:D stack:    0 pid: 4955 ppid:     2 flags:0x00004000
>>>>>> [ 4550.057790] Workqueue: kcryptd/253:4 kcryptd_crypt [dm_crypt]
>>>>>> [ 4550.057794] Call Trace:
>>>>>> [ 4550.057796]  <TASK>
>>>>>> [ 4550.057798]  __schedule+0x373/0x1580
>>>>>> [ 4550.057801]  ? sysvec_apic_timer_interrupt+0xa/0x90
>>>>>> [ 4550.057803]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
>>>>>> [ 4550.057806]  schedule+0x5b/0xe0
>>>>>> [ 4550.057808]  md_bitmap_startwrite+0x177/0x1e0
>>>>>> [ 4550.057810]  ? finish_wait+0x90/0x90
>>>>>> [ 4550.057813]  add_stripe_bio+0x449/0x770 [raid456]
>>>>>> [ 4550.057818]  raid5_make_request+0x1cf/0xbd0 [raid456]
>>>>>> [ 4550.057821]  ? __bio_clone_fast+0xa5/0xe0
>>>>>> [ 4550.057824]  ? finish_wait+0x90/0x90
>>>>>> [ 4550.057826]  ? __blk_queue_split+0x2d0/0x580
>>>>>> [ 4550.057828]  md_handle_request+0x11f/0x1b0
>>>>>> [ 4550.057831]  md_submit_bio+0x6e/0xb0
>>>>>> [ 4550.057834]  __submit_bio+0x18c/0x220
>>>>>> [ 4550.057835]  ? srso_alias_return_thunk+0x5/0x7f
>>>>>> [ 4550.057837]  ? crypt_page_alloc+0x46/0x60 [dm_crypt]
>>>>>> [ 4550.057839]  submit_bio_noacct+0xbe/0x2d0
>>>>>> [ 4550.057842]  kcryptd_crypt+0x3a8/0x5a0 [dm_crypt]
>>>>>> [ 4550.057846]  process_one_work+0x1d3/0x360
>>>>>> [ 4550.057848]  worker_thread+0x4d/0x3b0
>>>>>> [ 4550.057850]  ? process_one_work+0x360/0x360
>>>>>> [ 4550.057852]  kthread+0x115/0x140
>>>>>> [ 4550.057854]  ? set_kthread_struct+0x50/0x50
>>>>>> [ 4550.057856]  ret_from_fork+0x1f/0x30
>>>>>> [ 4550.057860]  </TASK>
>>>>> 
>>>>> 
>>>>>> On 7. Aug 2024, at 08:46, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>> 
>>>>>> I tried updating to 5.15.164, but have to work around our config management, as some options have shifted that I need to filter out: NFSD_V3 and NFSD2_ACL are now fixed and cause config errors if set. I guess that’s a valid thing to happen within an LTS release. I’ll try again on Friday.
>>>>>> 
>>>>>>>> On 7. Aug 2024, at 07:31, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>>> 
>>>>>>>> Sure,
>>>>>>>> 
>>>>>>>> would you prefer me testing on 5.15.x or something else?
>>>>>>>> 
>>>>>>>> On 7. Aug 2024, at 04:55, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>>>>> 
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>> On 2024/08/06 22:10, Christian Theune wrote:
>>>>>>>>>> we are seeing an issue that can be triggered with relative ease on a server that has been working fine for a few weeks. The regular workload is a backup utility that copies off data from virtual disk images in 4MiB (compressed) chunks from Ceph onto a local NVME-based RAID-6 array that is encrypted using LUKS.
>>>>>>>>>> Today I started a larger rsync job from another server (that has a couple of million files with around 200-300 gib in total) to migrate data and we’ve seen the server suddenly lock up twice. Any IO that interacts with the mountpoint (/srv/backy) will hang indefinitely. A reset is required to get out of this as the machine will hang trying to unmount the affected filesystem. No other messages than the hung tasks are being presented - I have no indicator for hardware faults at the moment.
>>>>>>>>>> I’m messaging both dm-devel and linux-raid as I’m suspecting either one or both (or an interaction) might be the cause.
>>>>>>>>>> Kernel:
>>>>>>>>>> Linux version 5.15.138 (nixbld@localhost) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40) #1-NixOS SMP Wed Nov 8 16:26:52 UTC 2023
>>>>>>>> 
>>>>>>>> Since you can trigger this easily, I'll suggest you to try the latest
>>>>>>>> kernel release first.
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> Kuai
>>>>>>>> 
>>>>>>>>>> See the kernel config attached.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Liebe Grüße,
>>>>>>>> Christian Theune
>>>>>>>> 
>>>>>>>> -- 
>>>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>>> Liebe Grüße,
>>>>>> Christian Theune
>>>>>> 
>>>>>> -- 
>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>>> 
>>>>> 
>>>>>> Liebe Grüße,
>>>>>> Christian Theune
>>>>> 
>>>>>> -- 
>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>> 
>>>>> 
>>> 
>>>> Liebe Grüße,
>>>> Christian Theune
>>> 
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-19 21:05                 ` John Stoffel
  2024-08-24 16:56                   ` tihmstar
@ 2024-08-24 18:12                   ` Dragan Milivojević
  2024-08-27  1:27                     ` John Stoffel
  1 sibling, 1 reply; 88+ messages in thread
From: Dragan Milivojević @ 2024-08-24 18:12 UTC (permalink / raw)
  To: John Stoffel
  Cc: tihmstar, Christian Theune, Yu Kuai, linux-raid@vger.kernel.org,
	dm-devel, yukuai (C)

> Interesting.  Why this way?  It would seem you now have to enter N
> passwords on bootup, instead of just one.

On RedHat based distributions, by default, a single entry will unlock
multiple devices.
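
For illustration, a sketch of that setup (hypothetical device names and UUIDs; as documented for systemd-cryptsetup, a passphrase that succeeds for one volume is cached in the kernel keyring and retried for the remaining entries):

```text
# /etc/crypttab sketch: three LUKS member devices of one md array,
# unlocked with a single prompt on RedHat-style defaults
crypt-nvme0  UUID=aaaaaaaa-...  none  discard
crypt-nvme1  UUID=bbbbbbbb-...  none  discard
crypt-nvme2  UUID=cccccccc-...  none  discard
```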


* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15 19:13                                   ` Christian Theune
@ 2024-08-26 14:38                                     ` Christian Theune
  0 siblings, 0 replies; 88+ messages in thread
From: Christian Theune @ 2024-08-26 14:38 UTC (permalink / raw)
  To: John Stoffel; +Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

I still haven’t been able to create a reproducer based on a tmpfs setup. I’ve run the xfstests with increased load and time factors, but didn’t trigger a crash.

@yukuai - I’m running out of ideas. Personally, my preferred next step would be to gather debug output. If that takes some time, I’ll remain patient. :)

Something I do have on the horizon: I’ll receive a mostly identical server in the next few weeks and can try to reproduce the issue there, setting a few disks aside for a separate debugging array. Maybe the number of disks is also relevant, so I’ll also try with the full size.

> On 15. Aug 2024, at 21:13, Christian Theune <ct@flyingcircus.io> wrote:
> 
>>> On the plus side, I have a script now that can create the various
>>> loopback settings quickly, so I can try out things as needed. Not
>>> that valuable without a reproducer, yet, though.
>> 
>> Yay!  Please share it.
> 
> Will do next week after a bit of cleanup.

Here’s the setup I’ve been using with tmpfs-backed software RAID:

# Everything lives under /srv/test-raid; backing images sit on tmpfs
# so the whole array is in memory.
mkdir /srv/test-raid/
mkdir /srv/test-raid/backing

mount -t tmpfs none /srv/test-raid/backing

loops=()

# Four sparse ~1.1 GiB images, attached to loop devices.
for i in {0..3}; do
    dd if=/dev/zero of=/srv/test-raid/backing/img${i}.bin bs=1M seek=1100 count=1
    loops+=($(losetup -f /srv/test-raid/backing/img${i}.bin --show))
done

# 4-device RAID-6 across the loop devices.
mdadm --create /dev/md/test --level=6 --raid-devices=4 "${loops[@]}"

# A fifth image serves as the xfstests scratch device.
dd if=/dev/zero of=/srv/test-raid/backing/scratch.bin bs=1M seek=1100 count=1
export SCRATCH_DEV=$(losetup -f /srv/test-raid/backing/scratch.bin --show)
loops+=($SCRATCH_DEV)

mkfs.xfs /dev/md/test

mkdir /srv/test-raid/scratch
mkdir /srv/test-raid/test
#mount /dev/md/test /srv/test-raid/test   # xfstests mounts it itself

export TEST_DEV=$(realpath /dev/md/test)
export TEST_DIR=/srv/test-raid/test
export SCRATCH_MNT=/srv/test-raid/scratch

export LOAD_FACTOR=10
export TIME_FACTOR=10
# export SOAK_DURATION=1m

xfstests-check

# cleanup

umount /srv/test-raid/test
mdadm --stop /dev/md/test
for x in "${loops[@]}"; do losetup -d "$x"; done

umount /srv/test-raid/backing

rm -r /srv/test-raid


Hugs,
Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-24 18:12                   ` Dragan Milivojević
@ 2024-08-27  1:27                     ` John Stoffel
  0 siblings, 0 replies; 88+ messages in thread
From: John Stoffel @ 2024-08-27  1:27 UTC (permalink / raw)
  To: Dragan Milivojević
  Cc: John Stoffel, tihmstar, Christian Theune, Yu Kuai,
	linux-raid@vger.kernel.org, dm-devel, yukuai (C)

>>>>> "Dragan" == Dragan Milivojević <galileo@pkm-inc.com> writes:

>> Interesting.  Why this way?  It would seem you now have to enter N
>> passwords on bootup, instead of just one.

> On RedHat based distributions, by default, a single entry will
> unlock multiple devices.

Something new to learn!  Thanks,


* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-08-15 11:14                                   ` Yu Kuai
  2024-08-15 11:24                                     ` Christian Theune
@ 2024-10-22 15:02                                     ` Christian Theune
  2024-10-23  1:13                                       ` Yu Kuai
  1 sibling, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-22 15:02 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

I had to put this issue aside and as Yu indicated he was busy I didn’t follow up yet.

@Yu: I don’t have new insights, but I have a basically identical machine that I will start adding new data with a similar structure soon. 

I couldn’t directly reproduce the issue there - likely because the network is a bit slower, as it’s connected from a remote site and has only 1G instead of 10G, due to the long distance.

Let me know if you’re interested in following up here and I’ll try to make room on my side to get you more input as needed.

Christian

> On 15. Aug 2024, at 13:14, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> On 2024/08/15 18:03, Christian Theune wrote:
>> Hi,
>> small insight: even given my dataset that can reliably trigger this (after around 1.5 hours of rsyncing) it does not trigger on a specific set of files. I’ve deleted the data and started the rsync on a fresh directory (not a fresh filesystem, I can’t delete that as it carries important data) but it doesn’t always get stuck on the same files, even though rsync processes them in a repeatable order.
>> I’m wondering how to generate more insights from that. Maybe keeping a blktrace log might help?
>> It sounds like the specific pattern relies on XFS doing a specific thing there …
>> Wild idea: maybe running the xfstest suite on an in-memory raid 6 setup could reproduce this?
>> I’m guessing that the xfs people do not regularly run their test suite on a layered setup like mine with encryption and software raid?
> 
> That sounds great.
>>  Christian
>>> On 15. Aug 2024, at 08:19, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Hi,
>>> 
>>>> On 14. Aug 2024, at 10:53, Christian Theune <ct@flyingcircus.io> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>>> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
>>>>> 
>>>>> I'd probably just do the RAID6 tests first, get them out of the way.
>>>> 
>>>> Alright, those are running right now - I’ll let you know what happens.
>>> 
>>> I’m not making progress here. I can’t reproduce those on an in-memory loopback RAID 6. However, I can’t fully reproduce the rsync workload either: for me this only triggered after around 1.5 hours of progress on the NVMe, which resulted in the hangup. I can only create around 20 GiB worth of RAID 6 volume on this machine. I’ve tried running rsync until it exhausts the space, deleting the content and running rsync again, but I feel like this isn’t sufficient to trigger the issue. :(
>>> 
>>> I’m trying to find whether any specific pattern in the files around the time it locks up might be relevant here and try to run the rsync over that
>>> portion.
>>> 
>>> On the plus side, I have a script now that can create the various loopback settings quickly, so I can try out things as needed. Not that valuable without a reproducer, yet, though.
>>> 
>>> @Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?
> 
> Yes, however, I still need some time to sort out the internal process of
> raid5. I'm quite busy with some other work stuff and I'm familiar with
> raid1/10, but not too much about raid5. :(
> 
> Main idea is to figure out why IO are not dispatched to underlying
> disks.
> 
> Thanks,
> Kuai
> 
>>> 
>>> Christian
>>> 
>>> -- 
>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> Liebe Grüße,
>> Christian Theune


Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-22 15:02                                     ` Christian Theune
@ 2024-10-23  1:13                                       ` Yu Kuai
  2024-10-23  6:03                                         ` Christian Theune
  2024-10-25  8:39                                         ` Christian Theune
  0 siblings, 2 replies; 88+ messages in thread
From: Yu Kuai @ 2024-10-23  1:13 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

On 2024/10/22 23:02, Christian Theune wrote:
> Hi,
> 
> I had to put this issue aside and as Yu indicated he was busy I didn’t follow up yet.
> 
> @Yu: I don’t have new insights, but I have a basically identical machine that I will start adding new data with a similar structure soon.
> 
> I couldn’t directly reproduce the issue there - likely because the network is a bit slower as it’s connected from a remote side and has only 1G instead of 10G, due to the long distances.
> 
> Let me know if you’re interested in following up here and I’ll try to make room on my side to get you more input as needed.

Yes, sorry that I was totally busy with other things. :(

BTW, what is the result after bypassing the bitmap (disabling the
bitmap by kernel hacking)?

Thanks,
Kuai
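
A note for archive readers: a minimal sketch of what such a kernel hack could look like, assuming the md_bitmap_startwrite() signature from the 5.15 drivers/md/md-bitmap.c sources (a debugging hack only, since it defeats write-intent tracking):

```c
/* drivers/md/md-bitmap.c, 5.15.x: debugging hack, not a fix.
 * Early-return so raid5 writes never block waiting on the
 * per-chunk bitmap counters; md_bitmap_endwrite() would need a
 * matching stub so counters are never decremented. */
int md_bitmap_startwrite(struct bitmap *bitmap, sector_t offset,
			 unsigned long sectors, int behind)
{
	return 0; /* pretend the region was accounted for */
}
```

With such a stub the array behaves roughly as if created with --bitmap=none, at the cost of a full resync after any unclean shutdown.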

> 
> Christian
> 
>> On 15. Aug 2024, at 13:14, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2024/08/15 18:03, Christian Theune wrote:
>>> Hi,
>>> small insight: even given my dataset that can reliably trigger this (after around 1.5 hours of rsyncing) it does not trigger on a specific set of files. I’ve deleted the data and started the rsync on a fresh directory (not a fresh filesystem, I can’t delete that as it carries important data) but it doesn’t always get stuck on the same files, even though rsync processes them in a repeatable order.
>>> I’m wondering how to generate more insights from that. Maybe keeping a blktrace log might help?
>>> It sounds like the specific pattern relies on XFS doing a specific thing there …
>>> Wild idea: maybe running the xfstest suite on an in-memory raid 6 setup could reproduce this?
>>> I’m guessing that the xfs people do not regularly run their test suite on a layered setup like mine with encryption and software raid?
>>
>> That sounds greate.
>>>   Christian
>>>> On 15. Aug 2024, at 08:19, Christian Theune <ct@flyingcircus.io> wrote:
>>>>
>>>> Hi,
>>>>
>>>>> On 14. Aug 2024, at 10:53, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>>> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
>>>>>>
>>>>>> I'd probably just do the RAID6 tests first, get them out of the way.
>>>>>
>>>>> Alright, those are running right now - I’ll let you know what happens.
>>>>
>>>> I’m not making progress here. I can’t reproduce those on an in-memory loopback RAID 6. However, I can’t fully reproduce the rsync workload either: for me this only triggered after around 1.5 hours of progress on the NVMe, which resulted in the hangup. I can only create around 20 GiB worth of RAID 6 volume on this machine. I’ve tried running rsync until it exhausts the space, deleting the content and running rsync again, but I feel like this isn’t sufficient to trigger the issue. :(
>>>>
>>>> I’m trying to find whether any specific pattern in the files around the time it locks up might be relevant here and try to run the rsync over that
>>>> portion.
>>>>
>>>> On the plus side, I have a script now that can create the various loopback settings quickly, so I can try out things as needed. Not that valuable without a reproducer, yet, though.
>>>>
>>>> @Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?
>>
>> Yes, however, I still need some time to sort out the internal process of
>> raid5. I'm quite busy with some other work stuff and I'm familiar with
>> raid1/10, but not too much about raid5. :(
>>
>> Main idea is to figure out why IO are not dispatched to underlying
>> disks.
>>
>> Thanks,
>> Kuai
>>
>>>>
>>>> Christian
>>>>
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>> Liebe Grüße,
>>> Christian Theune
> 
> 
> Liebe Grüße,
> Christian Theune
> 



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-23  1:13                                       ` Yu Kuai
@ 2024-10-23  6:03                                         ` Christian Theune
  2024-10-23 17:50                                           ` Christian Theune
  2024-10-25  8:39                                         ` Christian Theune
  1 sibling, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-23  6:03 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

> On 23. Oct 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> On 2024/10/22 23:02, Christian Theune wrote:
>> Hi,
>> I had to put this issue aside and as Yu indicated he was busy I didn’t follow up yet.
>> @Yu: I don’t have new insights, but I have a basically identical machine that I will start adding new data with a similar structure soon.
>> I couldn’t directly reproduce the issue there - likely because the network is a bit slower as it’s connected from a remote side and has only 1G instead of 10G, due to the long distances.
>> Let me know if you’re interested in following up here and I’ll try to make room on my side to get you more input as needed.
> 
> Yes, sorry that I was totally busy with other things. :(
> 
> BTW, what is the result after bypassing the bitmap (disabling the
> bitmap by kernel hacking)?

I couldn’t follow up on this as the machine that I can reproduce this with has production data. I hope that I can trigger the issue again with the new machine (didn’t happen so far) and then apply the patch before it has production data.

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-23  6:03                                         ` Christian Theune
@ 2024-10-23 17:50                                           ` Christian Theune
  0 siblings, 0 replies; 88+ messages in thread
From: Christian Theune @ 2024-10-23 17:50 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

“good” news: my new machine also has the issue with 5.15.138 (still our current default kernel).

I can try upgrading and applying the bitmap patch in the next days.

Christian

> On 23. Oct 2024, at 08:03, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
>> On 23. Oct 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>> 
>> Hi,
>> 
>> 在 2024/10/22 23:02, Christian Theune 写道:
>>> Hi,
>>> I had to put this issue aside and as Yu indicated he was busy I didn’t follow up yet.
>>> @Yu: I don’t have new insights, but I have a basically identical machine that I will start adding new data with a similar structure soon.
>>> I couldn’t directly reproduce the issue there - likely because the network is a bit slower as it’s connected from a remote side and has only 1G instead of 10G, due to the long distances.
>>> Let me know if you’re interested in following up here and I’ll try to make room on my side to get you more input as needed.
>> 
>> Yes, sorry that I was totally busy with other things. :(
>> 
>> BTW, what is the result after bypassing the bitmap (disabling the
>> bitmap by kernel hacking)?
> 
> I couldn’t follow up on this as the machine that I can reproduce this with has production data. I hope that I can trigger the issue again with the new machine (didn’t happen so far) and then apply the patch before it has production data.
> 
> Christian
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-23  1:13                                       ` Yu Kuai
  2024-10-23  6:03                                         ` Christian Theune
@ 2024-10-25  8:39                                         ` Christian Theune
  2024-10-25 13:31                                           ` Dragan Milivojević
  1 sibling, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-25  8:39 UTC (permalink / raw)
  To: Yu Kuai; +Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C)

Hi,

I’m working on getting a test without bitmap. To make things simple for myself: is it helpful if I just use “mdadm --grow --bitmap=none” to disable it or is that futile?

Christian
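
For reference, a sketch of that route as commands (untested here; /dev/md127 stands in for the real array):

```shell
# Drop the internal write-intent bitmap from a running array
mdadm --grow --bitmap=none /dev/md127

# Verify: the "Intent Bitmap" line should be gone
mdadm --detail /dev/md127 | grep -i bitmap

# Re-enable later; the bitmap is rebuilt while the array stays online
mdadm --grow --bitmap=internal /dev/md127
```

The trade-off is that an unclean shutdown then requires a full resync instead of a bitmap-guided one.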

> On 23. Oct 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> On 2024/10/22 23:02, Christian Theune wrote:
>> Hi,
>> I had to put this issue aside and as Yu indicated he was busy I didn’t follow up yet.
>> @Yu: I don’t have new insights, but I have a basically identical machine that I will start adding new data with a similar structure soon.
>> I couldn’t directly reproduce the issue there - likely because the network is a bit slower as it’s connected from a remote side and has only 1G instead of 10G, due to the long distances.
>> Let me know if you’re interested in following up here and I’ll try to make room on my side to get you more input as needed.
> 
> Yes, sorry that I was totally busy with other things. :(
> 
> BTW, what is the result after bypassing the bitmap (disabling the
> bitmap by kernel hacking)?
> 
> Thanks,
> Kuai
> 
>> Christian
>>> On 15. Aug 2024, at 13:14, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>> 
>>> Hi,
>>> 
>>> On 2024/08/15 18:03, Christian Theune wrote:
>>>> Hi,
>>>> small insight: even given my dataset that can reliably trigger this (after around 1.5 hours of rsyncing) it does not trigger on a specific set of files. I’ve deleted the data and started the rsync on a fresh directory (not a fresh filesystem, I can’t delete that as it carries important data) but it doesn’t always get stuck on the same files, even though rsync processes them in a repeatable order.
>>>> I’m wondering how to generate more insights from that. Maybe keeping a blktrace log might help?
>>>> It sounds like the specific pattern relies on XFS doing a specific thing there …
>>>> Wild idea: maybe running the xfstest suite on an in-memory raid 6 setup could reproduce this?
>>>> I’m guessing that the xfs people do not regularly run their test suite on a layered setup like mine with encryption and software raid?
>>> 
>>> That sounds great.
>>>>  Christian
>>>>> On 15. Aug 2024, at 08:19, Christian Theune <ct@flyingcircus.io> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>>> On 14. Aug 2024, at 10:53, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>>> On 12. Aug 2024, at 20:37, John Stoffel <john@stoffel.org> wrote:
>>>>>>> 
>>>>>>> I'd probably just do the RAID6 tests first, get them out of the way.
>>>>>> 
>>>>>> Alright, those are running right now - I’ll let you know what happens.
>>>>> 
>>>>> I’m not making progress here. I can’t reproduce those on an in-memory loopback RAID 6. However, I can’t fully reproduce the rsync workload either: for me this only triggered after around 1.5 hours of progress on the NVMe, which resulted in the hangup. I can only create around 20 GiB worth of RAID 6 volume on this machine. I’ve tried running rsync until it exhausts the space, deleting the content and running rsync again, but I feel like this isn’t sufficient to trigger the issue. :(
>>>>> 
>>>>> I’m trying to find whether any specific pattern in the files around the time it locks up might be relevant here and try to run the rsync over that
>>>>> portion.
>>>>> 
>>>>> On the plus side, I have a script now that can create the various loopback settings quickly, so I can try out things as needed. Not that valuable without a reproducer, yet, though.
>>>>> 
>>>>> @Yu: you mentioned that you might be able to provide me a kernel that produces more error logging to diagnose this? Any chance we could try that route?
>>> 
>>> Yes, however, I still need some time to sort out the internal process of
>>> raid5. I'm quite busy with some other work stuff and I'm familiar with
>>> raid1/10, but not too much about raid5. :(
>>> 
>>> Main idea is to figure out why IO are not dispatched to underlying
>>> disks.
>>> 
>>> Thanks,
>>> Kuai
>>> 
>>>>> 
>>>>> Christian
>>>>> 
>>>>> -- 
>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>> Liebe Grüße,
>>>> Christian Theune
>> Liebe Grüße,
>> Christian Theune


Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-25  8:39                                         ` Christian Theune
@ 2024-10-25 13:31                                           ` Dragan Milivojević
  2024-10-25 14:02                                             ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Dragan Milivojević @ 2024-10-25 13:31 UTC (permalink / raw)
  To: Christian Theune
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

Take this with a grain of salt,
since I'm not a kernel developer:

If you can trigger the issue with bitmap=none
I would say that is a valuable data point.

On Fri, 25 Oct 2024 at 10:40, Christian Theune <ct@flyingcircus.io> wrote:
>
> Hi,
>
> I’m working on getting a test without bitmap. To make things simple for myself: is it helpful if I just use “mdadm --grow --bitmap=none” to disable it or is that futile?
>
> Christian
>
> > On 23. Oct 2024, at 03:13, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >


* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-25 13:31                                           ` Dragan Milivojević
@ 2024-10-25 14:02                                             ` Christian Theune
  2024-10-26  5:37                                               ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-25 14:02 UTC (permalink / raw)
  To: Dragan Milivojević
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

Hi,

> On 25. Oct 2024, at 15:31, Dragan Milivojević <galileo@pkm-inc.com> wrote:
> 
> Take this with a grain of salt,
> since I'm not a kernel developer:
> 
> If you can trigger the issue with bitmap=none
> I would say that is a valuable data point.

Yeah, this was more directed towards the question whether Yu needs me to run the patch that he posted earlier.

So. The current status is: previously this crashed within 2-3 hours. Both machines are now running with the bitmap turned off as described above and have been syncing data for about 7 hours. This seems to indicate that the bitmap is involved here.

@Yu: let me know what next step you’d like to get from me.

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-25 14:02                                             ` Christian Theune
@ 2024-10-26  5:37                                               ` Christian Theune
  2024-10-26  9:07                                                 ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-26  5:37 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel, yukuai (C),
	Dragan Milivojević


> On 25. Oct 2024, at 16:02, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Yeah, this was more directed towards the question whether Yu needs me to run the patch that he posted earlier.
> 
> So. The current status is: previously this crashed within 2-3 hours. Both machines are now running with the bitmap turned off as described above and have been syncing data for about 7 hours. This seems to indicate that the bitmap is involved here.

Update: both machines have been able to finish their multi-TiB rsync job that previously caused reliable lockups. So: the bitmap code seems to be the culprit here … 

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-26  5:37                                               ` Christian Theune
@ 2024-10-26  9:07                                                 ` Yu Kuai
  2024-10-26 11:51                                                   ` Christian Theune
  2024-10-26 12:07                                                   ` Christian Theune
  0 siblings, 2 replies; 88+ messages in thread
From: Yu Kuai @ 2024-10-26  9:07 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/10/26 13:37, Christian Theune wrote:
> 
>> On 25. Oct 2024, at 16:02, Christian Theune <ct@flyingcircus.io> wrote:
>>
>> Yeah, this was more directed towards the question whether Yu needs me to run the patch that he posted earlier.
>>
>> So. The current status is: previously this crashed within 2-3 hours. Both machines are now running with the bitmap turned off as described above and have been syncing data for about 7 hours. This seems to indicate that the bitmap is involved here.
> 
> Update: both machines have been able to finish their multi-TiB rsync job that previously caused reliable lockups. So: the bitmap code seems to be the culprit here …
> 
> Christian
> 

Then, can you enable bitmap and test the following debug patch:

Thanks,
Kuai

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 58f71c3e1368..b2a75a904209 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2369,6 +2369,7 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
                atomic_set(&sh->count, 1);
                sh->raid_conf = conf;
                sh->log_start = MaxSector;
+               atomic_set(&sh->bitmap_counts, 0);

                if (raid5_has_ppl(conf)) {
                        sh->ppl_page = alloc_page(gfp);
@@ -3565,6 +3566,7 @@ static void __add_stripe_bio(struct stripe_head *sh, struct bio *bi,
                spin_unlock_irq(&sh->stripe_lock);
                conf->mddev->bitmap_ops->startwrite(conf->mddev, sh->sector,
                                        RAID5_STRIPE_SECTORS(conf), false);
+               printk("%s: %s: start %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_inc_return(&sh->bitmap_counts));
                spin_lock_irq(&sh->stripe_lock);
                clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
                if (!sh->batch_head) {
@@ -3662,10 +3664,12 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
                        bio_io_error(bi);
                        bi = nextbi;
                }
-               if (bitmap_end)
+               if (bitmap_end) {
                        conf->mddev->bitmap_ops->endwrite(conf->mddev,
                                        sh->sector, RAID5_STRIPE_SECTORS(conf),
                                        false, false);
+                       printk("%s: %s: end %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_dec_return(&sh->bitmap_counts));
+               }
                bitmap_end = 0;
                /* and fail all 'written' */
                bi = sh->dev[i].written;
@@ -3709,10 +3713,12 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
                                bi = nextbi;
                        }
                }
-               if (bitmap_end)
+               if (bitmap_end) {
                        conf->mddev->bitmap_ops->endwrite(conf->mddev,
                                        sh->sector, RAID5_STRIPE_SECTORS(conf),
                                        false, false);
+                       printk("%s: %s: end %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_dec_return(&sh->bitmap_counts));
+               }
                /* If we were in the middle of a write the parity block might
                 * still be locked - so just clear all R5_LOCKED flags
                 */
@@ -4065,6 +4071,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
                                        sh->sector, RAID5_STRIPE_SECTORS(conf),
                                        !test_bit(STRIPE_DEGRADED, &sh->state),
                                        false);
+                               printk("%s: %s: end %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_dec_return(&sh->bitmap_counts));
                                if (head_sh->batch_head) {
                                        sh = list_first_entry(&sh->batch_list,
                                                              struct stripe_head,
@@ -5785,9 +5792,11 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
                spin_unlock_irq(&sh->stripe_lock);
                if (conf->mddev->bitmap) {
                        for (d = 0; d < conf->raid_disks - conf->max_degraded;
-                            d++)
+                            d++) {
                                mddev->bitmap_ops->startwrite(mddev, sh->sector,
                                        RAID5_STRIPE_SECTORS(conf), false);
+                               printk("%s: %s: start %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_inc_return(&sh->bitmap_counts));
+                       }
                        sh->bm_seq = conf->seq_flush + 1;
                        set_bit(STRIPE_BIT_DELAY, &sh->state);
                }
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 896ecfc4afa6..12024249245e 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -255,6 +255,7 @@ struct stripe_head {
        int     nr_pages;       /* page array size */
        int     stripes_per_page;
 #endif
+       atomic_t bitmap_counts;
        struct r5dev {
                /* rreq and rvec are used for the replacement device when
                 * writing data to both devices.


* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-26  9:07                                                 ` Yu Kuai
@ 2024-10-26 11:51                                                   ` Christian Theune
  2024-10-26 12:07                                                   ` Christian Theune
  1 sibling, 0 replies; 88+ messages in thread
From: Christian Theune @ 2024-10-26 11:51 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)



> On 26. Oct 2024, at 11:07, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> 
> Then, can you enable bitmap and test the following debug patch:

Thanks, trying on 6.11.5 now.

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-26  9:07                                                 ` Yu Kuai
  2024-10-26 11:51                                                   ` Christian Theune
@ 2024-10-26 12:07                                                   ` Christian Theune
  2024-10-26 12:11                                                     ` Christian Theune
  1 sibling, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-26 12:07 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

I can’t apply this on 6.10.5 and trying to manually reconstruct your patch lets me directly stumble into:

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index c14cf2410365..ce5466d4791a 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2366,7 +2366,7 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
                INIT_LIST_HEAD(&sh->lru);
                INIT_LIST_HEAD(&sh->r5c);
                INIT_LIST_HEAD(&sh->log_list);
-               atomic_set(&sh->count, 1);
+               atomic_set(&sh->count, 0);
                sh->raid_conf = conf;
                sh->log_start = MaxSector;

Which version is your patch based on?

Christian

> On 26. Oct 2024, at 11:07, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> On 2024/10/26 13:37, Christian Theune wrote:
>>> On 25. Oct 2024, at 16:02, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Yeah, this was more directed towards the question whether Yu needs me to run the patch that he posted earlier.
>>> 
>>> So. The current status is: previously this crashed within 2-3 hours. Both machines are now running with the bitmap turned off as described above and have been syncing data for about 7 hours. This seems to indicate that the bitmap is involved here.
>> Update: both machines have been able to finish their multi-TiB rsync job that previously caused reliable lockups. So: the bitmap code seems to be the culprit here …
>> Christian
> 
> Then, can you enable bitmap and test the following debug patch:
> 
> Thanks,
> Kuai
> 
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 58f71c3e1368..b2a75a904209 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -2369,6 +2369,7 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
>                atomic_set(&sh->count, 1);
>                sh->raid_conf = conf;
>                sh->log_start = MaxSector;
> +               atomic_set(&sh->bitmap_counts, 0);
> 
>                if (raid5_has_ppl(conf)) {
>                        sh->ppl_page = alloc_page(gfp);
> @@ -3565,6 +3566,7 @@ static void __add_stripe_bio(struct stripe_head *sh, struct bio *bi,
>                spin_unlock_irq(&sh->stripe_lock);
>                conf->mddev->bitmap_ops->startwrite(conf->mddev, sh->sector,
>                                        RAID5_STRIPE_SECTORS(conf), false);
> +               printk("%s: %s: start %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_inc_return(&sh->bitmap_counts));
>                spin_lock_irq(&sh->stripe_lock);
>                clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
>                if (!sh->batch_head) {
> @@ -3662,10 +3664,12 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
>                        bio_io_error(bi);
>                        bi = nextbi;
>                }
> -               if (bitmap_end)
> +               if (bitmap_end) {
>                        conf->mddev->bitmap_ops->endwrite(conf->mddev,
>                                        sh->sector, RAID5_STRIPE_SECTORS(conf),
>                                        false, false);
> +                       printk("%s: %s: end %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_dec_return(&sh->bitmap_counts));
> +               }
>                bitmap_end = 0;
>                /* and fail all 'written' */
>                bi = sh->dev[i].written;
> @@ -3709,10 +3713,12 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
>                                bi = nextbi;
>                        }
>                }
> -               if (bitmap_end)
> +               if (bitmap_end) {
>                        conf->mddev->bitmap_ops->endwrite(conf->mddev,
>                                        sh->sector, RAID5_STRIPE_SECTORS(conf),
>                                        false, false);
> +                       printk("%s: %s: end %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_dec_return(&sh->bitmap_counts));
> +               }
>                /* If we were in the middle of a write the parity block might
>                 * still be locked - so just clear all R5_LOCKED flags
>                 */
> @@ -4065,6 +4071,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
>                                        sh->sector, RAID5_STRIPE_SECTORS(conf),
>                                        !test_bit(STRIPE_DEGRADED, &sh->state),
>                                        false);
> +                               printk("%s: %s: end %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_dec_return(&sh->bitmap_counts));
>                                if (head_sh->batch_head) {
>                                        sh = list_first_entry(&sh->batch_list,
>                                                              struct stripe_head,
> @@ -5785,9 +5792,11 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
>                spin_unlock_irq(&sh->stripe_lock);
>                if (conf->mddev->bitmap) {
>                        for (d = 0; d < conf->raid_disks - conf->max_degraded;
> -                            d++)
> +                            d++) {
>                                mddev->bitmap_ops->startwrite(mddev, sh->sector,
>                                        RAID5_STRIPE_SECTORS(conf), false);
> +                               printk("%s: %s: start %px(%llu+%lu) %u\n", __func__, mdname(conf->mddev), sh, sh->sector, RAID5_STRIPE_SECTORS(conf), atomic_inc_return(&sh->bitmap_counts));
> +                       }
>                        sh->bm_seq = conf->seq_flush + 1;
>                        set_bit(STRIPE_BIT_DELAY, &sh->state);
>                }
> diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
> index 896ecfc4afa6..12024249245e 100644
> --- a/drivers/md/raid5.h
> +++ b/drivers/md/raid5.h
> @@ -255,6 +255,7 @@ struct stripe_head {
>        int     nr_pages;       /* page array size */
>        int     stripes_per_page;
> #endif
> +       atomic_t bitmap_counts;
>        struct r5dev {
>                /* rreq and rvec are used for the replacement device when
>                 * writing data to both devices.


Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-26 12:07                                                   ` Christian Theune
@ 2024-10-26 12:11                                                     ` Christian Theune
  2024-10-30  1:25                                                       ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-26 12:11 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)



> On 26. Oct 2024, at 14:07, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
> I can’t apply this on 6.10.5 and trying to manually reconstruct your patch lets me directly stumble into:

6.11.5 of course

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-26 12:11                                                     ` Christian Theune
@ 2024-10-30  1:25                                                       ` Yu Kuai
  2024-10-30  6:29                                                         ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-10-30  1:25 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/10/26 20:11, Christian Theune wrote:
> 
> 
>> On 26. Oct 2024, at 14:07, Christian Theune <ct@flyingcircus.io> wrote:
>>
>> Hi,
>>
>> I can’t apply this on 6.10.5 and trying to manually reconstruct your patch lets me directly stumble into:
> 
> 6.11.5 of course
> 

I cooked this based on v6.12-rc5, so it's expected that there will be
conflicts on 6.11. Can you manually adapt it to your version?

Thanks,
Kuai



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-30  1:25                                                       ` Yu Kuai
@ 2024-10-30  6:29                                                         ` Christian Theune
  2024-10-31  7:48                                                           ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-30  6:29 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

> On 30. Oct 2024, at 02:25, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
>> On 2024/10/26 20:11, Christian Theune wrote:
>>> On 26. Oct 2024, at 14:07, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Hi,
>>> 
>>> I can’t apply this on 6.10.5 and trying to manually reconstruct your patch lets me directly stumble into:
>> 6.11.5 of course
> 
> I cooked this based on v6.12-rc5, so it's expected that there will be
> conflicts on 6.11. Can you manually adapt it to your version?

I will try, but I’m not sure. I don’t have a deep enough understanding to resolve some of the conflicts. In my previous mail I wasn’t sure which change would be the right one:

I guess if 6.12 doesn’t have this line at all:

-               atomic_set(&sh->count, 1);

… then setting it to 0 is fine?

+               atomic_set(&sh->count, 0);

But again, I have no idea what’s actually going on there … ;)

If you want I can try to wade through and give you a list of questions where the patch doesn’t obviously apply and you can let me know …

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-30  6:29                                                         ` Christian Theune
@ 2024-10-31  7:48                                                           ` Yu Kuai
  2024-10-31  8:04                                                             ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-10-31  7:48 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/10/30 14:29, Christian Theune wrote:
> Hi,
> 
>> On 30. Oct 2024, at 02:25, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2024/10/26 20:11, Christian Theune wrote:
>>>> On 26. Oct 2024, at 14:07, Christian Theune <ct@flyingcircus.io> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I can’t apply this on 6.10.5 and trying to manually reconstruct your patch lets me directly stumble into:
>>> 6.11.5 of course
>>
>> I cooked this based on v6.12-rc5, so it's expected that there will be
>> conflicts on 6.11. Can you manually adapt it to your version?
> 
> I will try, but I’m not sure. I don’t have a deep enough understanding to resolve some of the conflicts. In my previous mail I wasn’t sure which change would be the right one:
> 
> I guess if 6.12 doesn’t have this line at all:
> 
> -               atomic_set(&sh->count, 1);
> 
> … then setting it to 0 is fine?
> 
> +               atomic_set(&sh->count, 0);

My patch doesn't touch this field at all, why make such change? This is
not OK.
> 
> But again, I have no idea what’s actually going on there … ;)
> 
> If you want I can try to wade through and give you a list of questions where the patch doesn’t obviously apply and you can let me know …

Perhaps you can try v6.12-rc5 directly? If not, I'll provide a patch based
on v6.11 later.

Thanks,
Kuai

> 
> Christian
> 



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-31  7:48                                                           ` Yu Kuai
@ 2024-10-31  8:04                                                             ` Christian Theune
  2024-10-31 15:07                                                               ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-31  8:04 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

> On 31. Oct 2024, at 08:48, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
>> I will try, but I’m not sure. I don’t have a deep enough understanding to resolve some of the conflicts. In my previous mail I wasn’t sure which change would be the right one:
>> I guess if 6.12 doesn’t have this line at all:
>> -               atomic_set(&sh->count, 1);
>> … then setting it to 0 is fine?
>> +               atomic_set(&sh->count, 0);
> 
> My patch doesn't touch this field at all, why make such change? This is
> not OK.

Yeah, patch didn’t think that’s OK either, that’s why I came back instead of trying to run that. ;)

Here’s the part of the patch I extracted from the earlier emails:

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 58f71c3e1368..b2a75a904209 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2369,6 +2369,7 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
               atomic_set(&sh->count, 1);
               sh->raid_conf = conf;
               sh->log_start = MaxSector;
+               atomic_set(&sh->bitmap_counts, 0);

… aaand I just noticed that patch got confused and tried to apply your change 3 lines early, so I ended up with a conflict - correctly! :)

>> But again, I have no idea what’s actually going on there … ;)
>> If you want I can try to wade through and give you a list of questions where the patch doesn’t obviously apply and you can let me know …
> 
>> Perhaps you can try v6.12-rc5 directly? If not, I'll provide a patch based
>> on v6.11 later.

So. I’d like to avoid running 6.12-rc5, and if it isn’t too much trouble I’d appreciate a 6.11 patch, but now that I’ve understood what’s wrong I can try to create it myself in the next few days.

Cheers,
Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-31  8:04                                                             ` Christian Theune
@ 2024-10-31 15:07                                                               ` Christian Theune
  2024-10-31 19:46                                                                 ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-31 15:07 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

I was able to build the patch and am putting the kernel under stress now. 

> On 31. Oct 2024, at 09:04, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
>> On 31. Oct 2024, at 08:48, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>> 
>>> I will try, but I’m not sure. I don’t have a deep enough understanding to resolve some of the conflicts. In my previous mail I wasn’t sure which change would be the right one:
>>> I guess if 6.12 doesn’t have this line at all:
>>> -               atomic_set(&sh->count, 1);
>>> … then setting it to 0 is fine?
>>> +               atomic_set(&sh->count, 0);
>> 
>> My patch doesn't touch this field at all, why make such change? This is
>> not OK.
> 
> Yeah, patch didn’t think that’s OK either, that’s why I came back instead of trying to run that. ;)
> 
> Here’s the part of the patch I extracted from the earlier emails:
> 
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 58f71c3e1368..b2a75a904209 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -2369,6 +2369,7 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
>               atomic_set(&sh->count, 1);
>               sh->raid_conf = conf;
>               sh->log_start = MaxSector;
> +               atomic_set(&sh->bitmap_counts, 0);
> 
> … aaand I just noticed that patch got confused and tried to apply your change 3 lines early, so I ended up with a conflict - correctly! :)
> 
>>> But again, I have no idea what’s actually going on there … ;)
>>> If you want I can try to wade through and give you a list of questions where the patch doesn’t obviously apply and you can let me know …
>> 
>> Perhaps you can try v6.12-rc5 directly? If not, I'll provide a patch based
>> on v6.11 later.
> 
> So. I’d like to avoid running 6.12-rc5, and if it isn’t too much trouble I’d appreciate a 6.11 patch, but now that I’ve understood what’s wrong I can try to create it myself in the next few days.
> 
> Cheers,
> Christian
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-31 15:07                                                               ` Christian Theune
@ 2024-10-31 19:46                                                                 ` Christian Theune
  2024-10-31 20:33                                                                   ` John Stoffel
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-10-31 19:46 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

The system has been running under stress for a while on 6.11.5 with the debug patch applied. I have two observations so far:

1. The bitmap_counts are sometimes low and sometimes very high and intermingled like this:

Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bf1db20000(29009381448+8) 7
Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bf9d6fbf80(29009382168+8) 5
Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721beec896f20(29009381928+8) 4294967242
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 3
Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfb083df40(29009381456+8) 7
Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfc92a2fa0(29009381936+8) 5
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 2
Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721c074f8df40(29009381464+8) 7
Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfa3b2df40(29009381944+8) 5
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 1
Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721beec219fc0(29009381472+8) 4294967268
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 0
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967247
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967246
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967245
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967244
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967243
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967242
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967241
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 6
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 5
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 4
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 3
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 2
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 1
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 0
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 6
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 5
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 4
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 3
Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 2

Is the high number an indicator of something weird?

2. The printk calls seem expensive - probably because a serial console is attached. I’m trying to detach it, but our config is being stubborn about that.

This results in constantly high CPU usage by md127_raid6 (a full single core at 100%, accounted as sys time, of course) as well as intermittent RCU stalls in printk.

I’m wondering whether this will limit our ability to reproduce the lockup …

I’ll keep the system running for a bit longer, and if it doesn’t lock up, I’ll try harder to get rid of the serial console.
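(Aside: if detaching the console stays stubborn, the console loglevel can usually be lowered at runtime instead, so the debug printk()s still reach the ring buffer and the journal but are no longer pushed over the slow serial line. A sketch, assuming root and the usual util-linux dmesg:

```shell
# Show the current printk settings (four values: console_loglevel,
# default_message_loglevel, minimum_console_loglevel, default_console_loglevel).
cat /proc/sys/kernel/printk

# Keep everything in the ring buffer / journal, but stop pushing messages
# below "warn" severity over the serial console. The sysctl variant sets
# all four values, so check the current ones first.
dmesg --console-level warn      # roughly: sysctl -w kernel.printk="4 4 1 7"
```

That should take the serial line out of the printk hot path without losing the debug output.)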

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-31 19:46                                                                 ` Christian Theune
@ 2024-10-31 20:33                                                                   ` John Stoffel
  2024-11-01  2:02                                                                     ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: John Stoffel @ 2024-10-31 20:33 UTC (permalink / raw)
  To: Christian Theune
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:

> Hi,
> the system has been running under stress for a while on 6.11.5 with the debugging. I have two observations so far:

> 1. The bitmap_counts are sometimes low and sometimes very high and intermingled like this:

> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bf1db20000(29009381448+8) 7
> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bf9d6fbf80(29009382168+8) 5
> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721beec896f20(29009381928+8) 4294967242
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 3
> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfb083df40(29009381456+8) 7
> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfc92a2fa0(29009381936+8) 5
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 2
> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721c074f8df40(29009381464+8) 7
> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfa3b2df40(29009381944+8) 5
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 1
> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721beec219fc0(29009381472+8) 4294967268
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 0
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967247
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967246
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967245
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967244
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967243
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967242
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967241
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 6
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 5
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 4
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 3
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 2
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 1
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 0
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 6
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 5
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 4
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 3
> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 2

> Is the high number an indicator of something weird?

Is this number wrapping around and not being detected?  Maybe a
signed/unsigned issue?  Total wild-ass guess on my part...
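For what it's worth, 4294967242 is exactly 2^32 - 54, i.e. the bit pattern a 32-bit unsigned counter shows after being decremented 54 steps past zero - which would fit the wrap-around theory. A quick sanity check in plain shell arithmetic (my illustration, not kernel code):

```shell
# Reinterpret the logged value as a signed 32-bit number:
echo $(( 4294967242 - (1 << 32) ))   # prints -54

# And the reverse: -54 truncated to 32 unsigned bits:
echo $(( -54 & 0xFFFFFFFF ))         # prints 4294967242
```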

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-10-31 20:33                                                                   ` John Stoffel
@ 2024-11-01  2:02                                                                     ` Yu Kuai
  2024-11-01  7:56                                                                       ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-11-01  2:02 UTC (permalink / raw)
  To: John Stoffel, Christian Theune
  Cc: Yu Kuai, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/11/01 4:33, John Stoffel wrote:
>>>>>> "Christian" == Christian Theune <ct@flyingcircus.io> writes:
> 
>> Hi,
>> the system has been running under stress for a while on 6.11.5 with the debugging. I have two observations so far:
> 
>> 1. The bitmap_counts are sometimes low and sometimes very high and intermingled like this:
> 
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bf1db20000(29009381448+8) 7
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bf9d6fbf80(29009382168+8) 5
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721beec896f20(29009381928+8) 4294967242

For this 'sh', can you grep for "ff2721beec896f20" in the whole log and
show the results? It looks like bitmap_startwrite and bitmap_endwrite are
not balanced for this 'sh', and this might be a real problem.

You can also do the same for some other 'sh'.
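Something like the following could pull out one stripe's history, and also flag every 'sh' whose starts and ends don't balance (a sketch - the journalctl invocation and the awk tally are my assumptions, adjust to wherever the debug output actually lands):

```shell
# Full start/end history for one stripe head:
journalctl -k --no-pager | grep 'ff2721beec896f20'

# Tally starts minus ends per stripe head and print any that are unbalanced:
journalctl -k --no-pager \
  | grep -oE '(start|end) (ff[0-9a-f]+)' \
  | awk '{ delta[$2] += ($1 == "start") ? 1 : -1 }
         END { for (p in delta) if (delta[p] != 0) print p, delta[p] }'
```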

Thanks,
Kuai

>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 3
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfb083df40(29009381456+8) 7
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfc92a2fa0(29009381936+8) 5
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 2
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721c074f8df40(29009381464+8) 7
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721bfa3b2df40(29009381944+8) 5
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 1
>> Oct 31 20:41:27 barbrady09 kernel: __add_stripe_bio: md127: start ff2721beec219fc0(29009381472+8) 4294967268
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c108f26f20(29009374480+8) 0
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967247
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967246
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967245
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967244
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967243
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967242
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721beec030000(29009374488+8) 4294967241
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 6
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 5
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 4
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 3
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 2
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 1
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721bf21496f20(29009374496+8) 0
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 6
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 5
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 4
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 3
>> Oct 31 20:41:27 barbrady09 kernel: handle_stripe_clean_event: md127: end ff2721c1aa216f20(29009374504+8) 2
> 
>> Is the high number an indicator of something weird?
> 
> Is this number wrapping around and not being detected?  Maybe a
> signed/unsigned issue?  Total wild ass guess on my part...
> 
> .
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-01  2:02                                                                     ` Yu Kuai
@ 2024-11-01  7:56                                                                       ` Christian Theune
  2024-11-01  8:33                                                                         ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-01  7:56 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

OK, so the journal didn’t have that because the volume was way too high. Looks like I actually need to stick with serial console logging after all.

I dug out a different log that goes back further, but even that one seems to be missing something from early on, when I didn’t have the serial console attached.

I’m wondering whether this indicates an issue during initialization. I’m going to reboot the machine and make sure I capture the early logs with those numbers.

[  405.347345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22301786792+8) 4294967259
[  432.542465] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967260
[  432.542469] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967261
[  434.272964] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967262
[  434.273175] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967263
[  434.273189] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967264
[  434.273285] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967265
[  434.274063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967264
[  434.274066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967263
[  434.274070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967262
[  434.274073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967261
[  434.274078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967260
[  434.274083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967259
[  434.276609] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967260
[  434.278939] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967261
[  464.922354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967260
[  464.931833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967259
[  466.964557] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967260
[  466.964616] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967261
[  474.399930] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967262
[  474.451451] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967263
[  489.447079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967262
[  489.456574] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967261
[  489.466069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967260
[  489.475565] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967259
[  491.235517] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967260
[  491.235602] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967261
[  498.153108] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967262
[  498.156307] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967263
[  530.332619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967262
[  530.342110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967261
[  530.351595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967260
[  530.361082] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967259
[  535.176774] __add_stripe_bio: md127: start ff2721beec8c2fa0(24985208424+8) 4294967260
[  549.125326] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24985208424+8) 4294967259
[  549.635782] __add_stripe_bio: md127: start ff2721beec8c2fa0(25521770024+8) 4294967261
[  590.875593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967260
[  590.885081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967259
[  596.973863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967263
[  596.973866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967262
[  596.973869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967261
[  596.973871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967260
[  596.973881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967259
[  596.974557] __add_stripe_bio: md127: start ff2721beec8c2fa0(26325099752+8) 4294967260
[  637.646142] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26325099752+8) 4294967259
[  641.292887] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032741096+8) 4294967260
[  654.931195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032741096+8) 4294967259
[  654.933295] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967260
[  654.933570] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967261
[  654.935967] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967262
[  654.937411] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967263
[  683.008873] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967264
[  685.689494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967263
[  685.689496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967262
[  685.689498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967261
[  685.689499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967260
[  685.689501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967259
[  685.690260] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967260
[  685.692999] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967261
[  685.693119] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967262
[  685.693124] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967263
[  685.693427] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967264
[  685.693428] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967265
[  685.693517] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967266
[  685.693528] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967267
[  713.684556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967266
[  713.694044] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967265
[  713.703539] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967264
[  713.713026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967263
[  713.722512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967262
[  713.731996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967261
[  713.741480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967260
[  713.750962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967259
[  715.765954] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967260
[  715.766034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967261
[  715.766278] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967262
[  715.766305] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967263
[  715.766468] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967264
[  716.077253] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967265
[  731.258391] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
[  731.258401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967261
[  731.258584] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967262
[  731.258711] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967263
[  731.260991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967264
[  731.261318] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967265
[  731.261513] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967266
[  758.285428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967265
[  758.294912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967264
[  758.304396] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967263
[  758.313881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967262
[  758.323377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967261
[  758.332875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967260
[  758.342365] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
[  758.922198] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
[  780.668347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
[  780.957247] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967260
[  780.957393] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967261
[  780.957440] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967262
[  780.957616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967263
[  780.957675] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967264
[  780.957754] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967265
[  790.623177] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967266
[  828.374094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967265
[  828.383581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967264
[  828.393067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967263
[  828.402553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967262
[  828.412040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967261
[  828.421525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967260
[  828.431012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967259
[  830.477927] __add_stripe_bio: md127: start ff2721beec8c2fa0(13690207080+8) 4294967260
[  851.040449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(13690207080+8) 4294967259
[  851.762678] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967260
[  851.762837] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967261
[  851.762948] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967262
[  851.763032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967263
[  851.763068] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967264
[  851.763112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967265
[  851.763202] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967266
[  851.766405] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967267
[  851.768763] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967266
[  851.768766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967265
[  851.768768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967264
[  851.768770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967263
[  851.768773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967262
[  851.768775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967261
[  851.768778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967260
[  851.768780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967259
[  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967261
[  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967260
[  880.058982] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967262
[  880.059032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967263
[  880.059090] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967264
[  880.059140] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967265
[  880.059317] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967266
[  891.735497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967265
[  891.744974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967264
[  891.754455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967263
[  891.763939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967262
[  891.773422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967261
[  891.782975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967260
[  891.792469] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967259
[  897.108788] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967260
[  897.108789] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967261
[  897.108813] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967262
[  897.108823] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967263
[  903.693112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967264
[  904.663454] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967265
[  906.906830] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967260
[  906.908087] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967261
[  906.908508] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967262
[  906.910088] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967263
[  906.912093] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967264
[  906.912840] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967265
[  906.914294] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967266
[  906.914323] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967267
[  906.914806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967266
[  906.914808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967265
[  906.914809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967264
[  906.914810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967263
[  906.914811] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967262
[  906.914813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967261
[  906.914815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967260
[  906.914817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967259
[  934.849642] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861773736+8) 4294967261
[  934.854037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967260
[  934.854040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967259
[  934.855808] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967260
[  934.855945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967261
[  963.315203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967262
[  963.315320] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967263
[  963.315327] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967264
[  963.315499] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967265
[  982.866693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967264
[  982.876178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967263
[  982.885665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967262
[  982.895158] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967261
[  982.904644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967260
[  982.914129] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967259
[  990.121616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967260
[  990.121662] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967261
[  990.121768] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967262
[  990.121828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967263
[  990.121843] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967264
[ 1013.206756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967263
[ 1013.206757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967262
[ 1013.206758] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967261
[ 1013.206759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967260
[ 1013.224363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967259
[ 1032.134913] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967260
[ 1032.134928] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967261
[ 1032.135028] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967262
[ 1032.135078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967263
[ 1041.027196] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967264
[ 1041.027321] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967265
[ 1041.027485] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967266
[ 1057.623365] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967267
[ 1076.893035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967266
[ 1076.902520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967265
[ 1076.912004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967264
[ 1076.921490] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967263
[ 1076.930986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967262
[ 1076.940475] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967261
[ 1076.949962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967260
[ 1076.959446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967259
[ 1077.721459] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967260
[ 1077.721615] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967261
[ 1077.721706] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967262
[ 1077.721739] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967263
[ 1077.721765] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967264
[ 1110.833257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967263
[ 1110.842743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967262
[ 1110.852225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967261
[ 1110.861709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967260
[ 1110.871194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967259
[ 1112.052569] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967260
[ 1112.052666] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967261
[ 1112.052695] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967262
[ 1112.052727] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967263
[ 1112.052778] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967264
[ 1112.053637] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967265
[ 1112.053649] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967266
[ 1173.829738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967265
[ 1173.839223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967264
[ 1173.848709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967263
[ 1173.858195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967262
[ 1173.867683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967261
[ 1173.877167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967260
[ 1173.886654] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967259
[ 1176.428651] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967260
[ 1176.428940] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967263
[ 1176.428942] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967264
[ 1176.428939] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967262
[ 1176.428903] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967261
[ 1176.429040] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967265
[ 1191.700497] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967265
[ 1191.704134] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967266
[ 1191.704199] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967267
[ 1191.705804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967266
[ 1191.705808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967265
[ 1191.705809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967264
[ 1191.705812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967263
[ 1191.705815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967262
[ 1191.705817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967261
[ 1191.705819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967260
[ 1191.705821] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967259
[ 1191.810863] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792488+8) 4294967260
[ 1244.235788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27917293544+8) 4294967260
[ 1309.535319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967264
[ 1309.544810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967263
[ 1309.554303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967262
[ 1309.563787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967261
[ 1309.573272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967260
[ 1309.582759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967259
[ 1314.950362] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967262
[ 1314.950455] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967263
[ 1314.950457] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967264
[ 1314.950470] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967265
[ 1345.736319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967264
[ 1345.745804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967263
[ 1345.755290] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967262
[ 1345.764773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967261
[ 1345.774264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967260
[ 1345.783759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967259
[ 1346.823135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28462541160+8) 4294967259
[ 1346.824776] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967260
[ 1346.824799] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967261
[ 1346.824806] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967262
[ 1346.824922] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967263
[ 1346.825566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967264
[ 1373.560546] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814568+8) 4294967260
[ 1431.650090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861814568+8) 4294967259
[ 1468.944088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967265
[ 1468.953581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967264
[ 1468.963067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967263
[ 1468.972552] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967262
[ 1468.982036] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967261
[ 1468.991524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967260
[ 1469.001009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967259
[ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967261
[ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967260
[ 1474.904634] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967262
[ 1474.904752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967263
[ 1474.904798] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967264
[ 1474.904805] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967265
[ 1477.837716] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967266
[ 1479.836591] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967260
[ 1479.858896] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967261
[ 1479.859238] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967262
[ 1479.859525] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967263
[ 1479.859669] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967264
[ 1479.859897] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967265
[ 1479.860071] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967266
[ 1507.386887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967265
[ 1507.396375] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967264
[ 1507.405858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967263
[ 1507.415343] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967262
[ 1507.424831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967261
[ 1507.434322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967260
[ 1507.443826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967259
[ 1569.056325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967264
[ 1569.065837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967263
[ 1569.075325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967262
[ 1569.084816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967261
[ 1569.094308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967260
[ 1569.103801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967259
[ 1571.985752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967260
[ 1571.985858] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967261
[ 1571.985888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967263
[ 1571.985864] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967262
[ 1571.985962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967264
[ 1571.986450] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967265
[ 1582.882338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967264
[ 1582.882340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967263
[ 1582.882342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967262
[ 1582.882344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967261
[ 1582.882345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967260
[ 1582.882346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967259
[ 1582.884560] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967260
[ 1582.884860] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967261
[ 1582.884880] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967262
[ 1582.885034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967263
[ 1582.885126] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967264
[ 1582.885164] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967265
[ 1675.519030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967264
[ 1675.528518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967263
[ 1675.538016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967262
[ 1675.547513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967261
[ 1675.556999] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967260
[ 1675.566485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967259
[ 1682.250399] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967260
[ 1682.250639] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967261
[ 1682.250690] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967262
[ 1682.250718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967263
[ 1682.250974] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967264
[ 1682.251078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967265
[ 1682.251306] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967266
[ 1704.298207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967265
[ 1704.298211] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967264
[ 1704.298214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967263
[ 1704.298216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967262
[ 1704.298218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967261
[ 1704.298220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967260
[ 1704.298222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967259
[ 1704.299566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967260
[ 1704.299718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967261
[ 1704.299758] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967262
[ 1704.299834] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967263
[ 1704.299888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967264
[ 1704.304001] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967265
[ 1704.304132] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967266
[ 1772.283712] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967264
[ 1772.293200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967263
[ 1772.302685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967262
[ 1772.312169] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967261
[ 1772.321652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967260
[ 1772.331135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967259
[ 1776.549687] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967260
[ 1776.549697] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967261
[ 1776.549898] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967263
[ 1776.549945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967264
[ 1776.549962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967265
[ 1776.549828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967262
[ 1776.550033] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967266
[ 1776.550080] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967267
[ 1781.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967266
[ 1782.080461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967265
[ 1782.199404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967264
[ 1782.318346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967263
[ 1782.438150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967262
[ 1782.557963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967261
[ 1782.677762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967260
[ 1782.797570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967259
[ 1786.992892] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967261
[ 1786.992878] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967260
[ 1786.993259] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967262
[ 1786.993401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967263
[ 1786.993449] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967264
[ 1795.858021] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967266
[ 1795.858009] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967265
[ 1795.858180] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967267
[ 1805.164880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967266
[ 1805.174370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967265
[ 1805.183853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967264
[ 1805.193339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967263
[ 1805.202828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967262
[ 1805.212314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967261
[ 1805.221800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967260
[ 1805.231291] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967259
[ 1807.730968] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967261
[ 1807.730937] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967260
[ 1807.731203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967262
[ 1807.731267] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967263
[ 1807.731406] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967264
[ 1807.731542] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967265
[ 1807.731764] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967266
[ 1893.800189] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967263
[ 1893.809691] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967262
[ 1893.819186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967261
[ 1893.828675] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967260
[ 1893.838165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967259
[ 1897.304170] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967260
[ 1897.304333] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967261
[ 1897.304579] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967262
[ 1897.304721] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967263
[ 1897.304812] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967264
[ 1897.304978] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967265
[ 1910.883901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861873064+8) 4294967259
[ 1910.888991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967262
[ 1910.888995] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967264
[ 1910.888988] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967261
[ 1910.888993] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967263
[ 1910.888986] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967260
[ 1990.952649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967264
[ 1990.952651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967263
[ 1990.952653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967262
[ 1990.952655] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967261
[ 1990.952657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967260
[ 1990.952659] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967259
[ 1990.957010] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967260
[ 1990.957011] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967261
[ 1990.957016] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967262
[ 1990.957020] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967263
[ 2021.437780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967262
[ 2021.437782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967261
[ 2021.437783] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967260
[ 2021.437785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967259
[ 2021.442407] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967260
[ 2021.443820] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967261
[ 2045.539668] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967262
[ 2045.540142] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967263
[ 2045.540232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967264
[ 2045.540262] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967265
[ 2050.125201] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967266
[ 2057.875279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967265
[ 2057.884767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967264
[ 2057.894262] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967263
[ 2057.903753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967262
[ 2057.913237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967261
[ 2057.922722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967260
[ 2057.932205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967259
[ 2059.233074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967260
[ 2059.233116] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967261
[ 2059.233120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967262
[ 2059.233171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967263
[ 2059.233632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967264
[ 2059.233684] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967265
[ 2059.235328] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967266
[ 2059.235336] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967267
[ 2059.238433] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967266
[ 2059.238435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967265
[ 2059.238436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967264
[ 2059.238437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967263
[ 2059.238439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967262
[ 2059.238440] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967261
[ 2059.238441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967260
[ 2059.238443] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967259
[ 2090.648331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967260
[ 2090.648399] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967261
[ 2090.648402] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967262
[ 2090.648414] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967263
[ 2090.648428] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967264
[ 2090.648540] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967265
[ 2090.651017] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967266
[ 2090.700177] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967267
[ 2118.173167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967266
[ 2118.182657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967265
[ 2118.192147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967264
[ 2118.201638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967263
[ 2118.211138] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967262
[ 2118.220626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967261
[ 2118.230111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967260
[ 2118.239602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967259
[ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967260
[ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967261
[ 2119.232691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967262
[ 2119.232707] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967263
[ 2119.232880] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967264
[ 2119.232926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967265
[ 2119.232990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967266
[ 2119.233054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967267
[ 2146.417917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967266
[ 2151.981747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967265
[ 2152.121412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967264
[ 2152.261047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967263
[ 2152.400699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967262
[ 2152.540356] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967261
[ 2152.680005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967260
[ 2152.819653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967259
[ 2157.037069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967260
[ 2157.037363] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967261
[ 2157.037405] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967262
[ 2157.037421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967263
[ 2157.037446] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967264
[ 2214.237201] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967263
[ 2214.246685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967262
[ 2214.256174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967261
[ 2214.265665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967260
[ 2214.275152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967259
[ 2220.022835] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967260
[ 2220.022859] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967261
[ 2220.022876] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967262
[ 2220.022912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967263
[ 2220.023258] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967265
[ 2220.023161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967264
[ 2243.495792] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130495656+8) 4294967264
[ 2272.045830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967265
[ 2272.045833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967264
[ 2272.045835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967263
[ 2272.045837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967262
[ 2272.045838] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967261
[ 2272.045840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967260
[ 2272.045841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967259
[ 2302.557785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967266
[ 2302.557787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967265
[ 2302.557789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967264
[ 2302.557791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967263
[ 2302.557793] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967262
[ 2302.557796] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967261
[ 2302.557797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967260
[ 2302.557799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967259
[ 2302.561904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967260
[ 2302.561926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967261
[ 2302.561957] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967263
[ 2302.561933] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967262
[ 2302.562006] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967264
[ 2302.562203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967265
[ 2302.562232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967266
[ 2302.562597] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967267
[ 2329.647721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967266
[ 2329.738196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967265
[ 2329.828677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967264
[ 2329.919153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967263
[ 2330.009644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967262
[ 2330.100125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967261
[ 2330.190603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967260
[ 2330.281085] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967259
[ 2332.172188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967265
[ 2332.172198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967264
[ 2332.172233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967263
[ 2332.172242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967262
[ 2332.172255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967261
[ 2332.172264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967260
[ 2332.172278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967259
[ 2332.178323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967262
[ 2332.178317] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967261
[ 2332.178310] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967260
[ 2332.178326] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967263
[ 2332.178394] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967264
[ 2332.178580] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967265
[ 2332.178600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967266
[ 2332.178697] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967267
[ 2358.527771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967266
[ 2358.657096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967265
[ 2358.786383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967264
[ 2358.915693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967263
[ 2359.044994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967262
[ 2359.174296] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967261
[ 2359.303592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967260
[ 2359.432875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967259
[ 2367.401519] __add_stripe_bio: md127: start ff2721beec8c2fa0(27111972904+8) 4294967260
[ 2367.410065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27111972904+8) 4294967259
[ 2399.790368] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967260
[ 2403.855440] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967261
[ 2403.855574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967262
[ 2403.855636] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967263
[ 2403.855687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967264
[ 2478.513548] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967263
[ 2478.523034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967262
[ 2478.532518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967261
[ 2478.542003] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967260
[ 2478.551487] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967259
[ 2483.294420] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967264
[ 2483.294422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967263
[ 2483.294423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967262
[ 2483.294425] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967261
[ 2483.294426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967260
[ 2483.294428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967259
[ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967260
[ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967261
[ 2515.139584] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967262
[ 2515.139592] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967264
[ 2515.139587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967263
[ 2515.139593] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967265
[ 2515.139795] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967266
[ 2556.408113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967264
[ 2556.417600] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967263
[ 2556.427088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967262
[ 2556.436578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967261
[ 2556.446066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967260
[ 2556.455551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967259
[ 2559.815547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967260
[ 2559.815563] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967261
[ 2559.815793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967262
[ 2559.815874] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967263
[ 2559.816031] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967264
[ 2568.256111] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967265
[ 2568.256157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967266
[ 2619.422458] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967265
[ 2619.431942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967264
[ 2619.441449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967263
[ 2619.450948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967262
[ 2619.460435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967261
[ 2619.469921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967260
[ 2619.479412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967259
[ 2633.472211] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967261
[ 2633.472205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967260
[ 2633.472305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967262
[ 2633.472427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967264
[ 2633.472417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967263
[ 2633.472587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967265
[ 2633.539700] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967266
[ 2661.223491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967265
[ 2661.223493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967264
[ 2661.223494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967263
[ 2661.223496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967262
[ 2661.223497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967261
[ 2661.223498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967260
[ 2661.223500] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967259
[ 2661.228040] __add_stripe_bio: md127: start ff2721beec8c2fa0(539699176+8) 4294967260
[ 2706.576782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(539699176+8) 4294967259
[ 2709.937157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967260
[ 2709.937171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967261
[ 2709.937316] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967262
[ 2709.937650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967263
[ 2709.937717] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967264
[ 2709.937724] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967265
[ 2709.937725] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967266
[ 2709.937737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967267
[ 2721.462599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967266
[ 2721.610877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967265
[ 2721.759136] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967264
[ 2721.907488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967263
[ 2722.055771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967262
[ 2722.204050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967261
[ 2722.352331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967260
[ 2722.500592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967259
[ 2724.772625] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967260
[ 2724.772751] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967261
[ 2724.772938] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967262
[ 2724.772988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967263
[ 2724.773119] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967264
[ 2754.673788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967263
[ 2754.673790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967262
[ 2754.673791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967261
[ 2754.673794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967260
[ 2754.673795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967259
[ 2785.394053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967261
[ 2785.394056] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967263
[ 2785.394059] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967265
[ 2785.394054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967262
[ 2785.394058] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967264
[ 2785.394050] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967260
[ 2785.463401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967266
[ 2785.472247] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967267
[ 2787.076731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967266
[ 2787.076732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967265
[ 2787.076734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967264
[ 2787.076735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967263
[ 2787.076736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967262
[ 2787.076738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967261
[ 2787.076739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967260
[ 2787.076740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967259
[ 2808.905214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967265
[ 2808.905333] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967266
[ 2808.905388] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967267
[ 2808.906939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967266
[ 2808.906941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967265
[ 2808.906943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967264
[ 2808.906944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967263
[ 2808.906946] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967262
[ 2808.906947] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967261
[ 2808.906950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967260
[ 2808.906952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967259
[ 2836.311276] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130552808+8) 4294967260
[ 2854.798417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
[ 2856.543067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967264
[ 2856.543070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967263
[ 2856.543073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967262
[ 2856.543075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967261
[ 2856.543077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967260
[ 2856.543079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
[ 2856.546312] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967260
[ 2856.546314] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967261
[ 2856.546421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967262
[ 2856.546509] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967263
[ 2856.546926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967264
[ 2886.489550] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967265
[ 2886.489595] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967266
[ 2886.489713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967267
[ 2897.617989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967266
[ 2897.627477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967265
[ 2897.636962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967264
[ 2897.646444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967263
[ 2897.655920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967262
[ 2897.665409] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967261
[ 2897.674910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967260
[ 2897.684404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967259
[ 2899.844282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967266
[ 2899.844316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967265
[ 2899.844354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967264
[ 2899.844382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967263
[ 2899.844423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967262
[ 2899.844462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967261
[ 2899.844516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967260
[ 2899.844570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967259
[ 2899.845690] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967260
[ 2899.845966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967261
[ 2899.846019] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967262
[ 2899.846062] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967263
[ 2899.846186] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967264
[ 2899.846207] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967265
[ 2899.846260] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967266
[ 2952.891498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967265
[ 2952.900984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967264
[ 2952.910478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967263
[ 2952.919966] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967262
[ 2952.929461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967261
[ 2952.938950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967260
[ 2952.948431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967259
[ 2955.316494] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967260
[ 2955.316704] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967261
[ 2955.316809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967262
[ 2955.316988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967263
[ 2955.317105] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967264
[ 2987.714377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967263
[ 2987.714379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967262
[ 2987.714381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967261
[ 2987.714383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967260
[ 2987.714385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967259
[ 2987.719137] __add_stripe_bio: md127: start ff2721beec8c2fa0(1092459240+8) 4294967260
[ 3047.110275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967264
[ 3047.110276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967263
[ 3047.110277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967262
[ 3047.110278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967261
[ 3047.110279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967260
[ 3047.110281] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967259
[ 3047.112501] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967260
[ 3047.112711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967261
[ 3047.112750] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967262
[ 3070.186991] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967263
[ 3070.187120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967264
[ 3110.127145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967263
[ 3110.136633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967262
[ 3110.146121] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967261
[ 3110.155611] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967260
[ 3110.165103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967259
[ 3113.362512] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967260
[ 3113.362527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967261
[ 3113.362737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967262
[ 3113.362788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967264
[ 3113.362772] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967263
[ 3113.363524] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967265
[ 3181.040787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967264
[ 3181.050278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967263
[ 3181.059767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967262
[ 3181.069267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967261
[ 3181.078760] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967260
[ 3181.088248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967259
[ 3190.006331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967260
[ 3190.006353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967261
[ 3190.006523] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967262
[ 3190.006526] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967263
[ 3190.006576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967264
[ 3190.006604] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967265
[ 3190.006676] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967266
[ 3222.489157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130590248+8) 4294967259
[ 3222.494665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967260
[ 3222.494810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967261
[ 3222.495400] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967262
[ 3222.495460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967263
[ 3222.496203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967264
[ 3222.496266] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967265
[ 3249.542100] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967264
[ 3249.542102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967263
[ 3249.542103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967262
[ 3249.542105] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967261
[ 3249.542107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967260
[ 3249.542109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967259
[ 3249.547575] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130590440+8) 4294967260
[ 3298.070385] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967260
[ 3298.070466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967261
[ 3298.070767] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967262
[ 3298.070824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967263
[ 3298.070896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967264
[ 3351.191989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967263
[ 3351.201478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967262
[ 3351.210961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967261
[ 3351.220447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967260
[ 3351.229931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967259
[ 3354.186090] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967260
[ 3354.186174] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967261
[ 3354.186453] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967262
[ 3354.186600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967263
[ 3354.186610] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967264
[ 3354.186666] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967265
[ 3354.186682] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967266
[ 3395.962921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967265
[ 3395.972408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967264
[ 3395.981893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967263
[ 3395.991379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967262
[ 3396.000863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967261
[ 3396.010344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967260
[ 3396.019828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967259
[ 3397.783940] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967260
[ 3397.783984] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967261
[ 3397.784015] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967262
[ 3397.784039] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967263
[ 3397.784102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967264
[ 3397.784112] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967265
[ 3397.784206] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967266
[ 3397.784239] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967267
[ 3407.297456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967266
[ 3407.515478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967265
[ 3407.733622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967264
[ 3407.952516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967263
[ 3408.171430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967262
[ 3408.390355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967261
[ 3408.609270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967260
[ 3408.828226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967259
[ 3410.118755] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967260
[ 3410.118912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967261
[ 3410.119044] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967262
[ 3410.119183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967263
[ 3410.119190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967264
[ 3410.119398] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967265
[ 3474.070919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967264
[ 3474.080400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967263
[ 3474.089879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967262
[ 3474.099370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967261
[ 3474.108856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967260
[ 3474.118342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967259
[ 3477.161290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967261
[ 3477.161249] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967260
[ 3477.161478] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967262
[ 3477.161505] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967263
[ 3487.704510] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967264
[ 3487.704567] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967265
[ 3510.992084] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967264
[ 3510.992086] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967263
[ 3510.992088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967262
[ 3510.992089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967261
[ 3510.992090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967260
[ 3510.992091] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967259
[ 3510.992993] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
[ 3510.993007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
[ 3550.083557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
[ 3550.083561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
[ 3555.089305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
[ 3555.089386] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
[ 3555.089618] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967262
[ 3555.089635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967263
[ 3555.089655] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967264
[ 3572.781054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967263
[ 3572.790547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967262
[ 3572.800034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967261
[ 3572.809523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
[ 3572.819005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
[ 3589.172647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967266
[ 3589.172750] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967265
[ 3589.172860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967264
[ 3589.172991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967263
[ 3589.173151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967262
[ 3589.173314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967261
[ 3589.173391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967260
[ 3589.173461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967259
[ 3589.175972] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743848+8) 4294967260
[ 3621.769395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032743848+8) 4294967259
[ 3623.304014] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399975272+8) 4294967260
[ 3686.974958] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399986216+8) 4294967267
[ 3722.135513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967266
[ 3722.144998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967265
[ 3722.154484] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967264
[ 3722.163972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967263
[ 3722.173455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967262
[ 3722.182939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967261
[ 3722.192426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967260
[ 3722.201912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967259
[ 3729.633922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967260
[ 3729.634199] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967261
[ 3729.634228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967262
[ 3729.634351] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967263
[ 3729.634466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967264
[ 3737.013926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967265
[ 3737.016635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967266
[ 3761.542817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967265
[ 3761.542819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967264
[ 3761.542820] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967263
[ 3761.542822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967262
[ 3761.542824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967261
[ 3761.542826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967260
[ 3761.542827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967259
[ 3761.545145] __add_stripe_bio: md127: start ff2721beec8c2fa0(4298178536+8) 4294967260
[ 3781.220916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(4298178536+8) 4294967259
[ 3816.363850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967261
[ 3816.363852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967260
[ 3816.363853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967259
[ 3816.366775] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967260
[ 3816.367295] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967261
[ 3816.367301] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967262
[ 3816.367544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967263
[ 3816.367693] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967264
[ 3816.368092] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967265
[ 3843.302810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967266
[ 3869.720089] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967261
[ 3869.720097] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967262
[ 3869.720194] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967263
[ 3869.720213] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967264
[ 3869.725214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967265
[ 3911.191478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967264
[ 3911.200970] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967263
[ 3911.210456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967262
[ 3911.219942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967261
[ 3911.229426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967260
[ 3911.238911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967259
[ 3914.293028] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967264
[ 3914.293031] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967263
[ 3914.293033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967262
[ 3914.293035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967261
[ 3914.293038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967260
[ 3914.293040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967259
[ 3914.295622] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967261
[ 3914.295641] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967262
[ 3914.295643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967263
[ 3914.295621] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967260
[ 3914.295871] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967264
[ 3999.621383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967263
[ 3999.630885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967262
[ 3999.640370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967261
[ 3999.649865] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967260
[ 3999.659351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967259
[ 4004.391868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967260
[ 4004.391913] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967261
[ 4004.392117] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967262
[ 4004.392564] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967263
[ 4004.392579] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967264
[ 4004.392650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967265
[ 4004.392858] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967266
[ 4074.054694] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967265
[ 4074.064183] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967264
[ 4074.073668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967263
[ 4074.083155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967262
[ 4074.092647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967261
[ 4074.102149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967260
[ 4074.111642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967259
[ 4114.890444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967265
[ 4114.890445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967264
[ 4114.890446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967263
[ 4114.890447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967262
[ 4114.890448] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967261
[ 4114.890449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967260
[ 4114.890450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967259
[ 4114.894014] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743784+8) 4294967260
[ 4137.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967260
[ 4137.422134] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967261
[ 4137.422222] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967262
[ 4137.422380] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967263
[ 4137.422447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967264
[ 4137.422527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967265
[ 4137.422809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967266
[ 4195.441417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967265
[ 4195.450902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967264
[ 4195.460386] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967263
[ 4195.469873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967262
[ 4195.479360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967261
[ 4195.488848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967260
[ 4195.498336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967259
[ 4205.104432] __add_stripe_bio: md127: start ff2721beec8c2fa0(6458132264+8) 4294967260
[ 4270.444837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6458132264+8) 4294967259
[ 4282.274860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967265
[ 4282.274879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967264
[ 4282.274897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967263
[ 4282.274916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967262
[ 4282.274936] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967261
[ 4282.274955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967260
[ 4282.274975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967259
[ 4282.276460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967260
[ 4282.276797] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967261
[ 4282.276964] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967262
[ 4282.277061] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967263
[ 4282.277143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967264
[ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967266
[ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967265
[ 4282.277271] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967267
[ 4282.321155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967266
[ 4282.387435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967265
[ 4282.448733] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967264
[ 4282.448742] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967263
[ 4282.448751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967262
[ 4282.448759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967261
[ 4282.448767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967260
[ 4282.448775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967259
[ 4315.061841] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967260
[ 4315.061855] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967262
[ 4315.061844] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967261
[ 4315.061924] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967263
[ 4315.061976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967264
[ 4315.063503] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967265
[ 4382.511212] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967264
[ 4382.520702] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967263
[ 4382.530180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967262
[ 4382.539665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967261
[ 4382.549163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967260
[ 4382.558657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967259
[ 4387.176732] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967260
[ 4387.176821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967261
[ 4387.176898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967262
[ 4387.177030] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967263
[ 4387.177229] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967264
[ 4387.177270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967265
[ 4457.957710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967264
[ 4457.957715] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967263
[ 4457.957719] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967262
[ 4457.957723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967261
[ 4457.957727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967260
[ 4457.957731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967259
[ 4457.961270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967260
[ 4457.961406] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967261
[ 4457.961619] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967262
[ 4457.961651] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967264
[ 4457.961806] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967265
[ 4457.961645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967263
[ 4485.589613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967264
[ 4485.589615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967263
[ 4485.589616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967262
[ 4485.589617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967261
[ 4485.589618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967260
[ 4485.589619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967259
[ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967260
[ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967261
[ 4485.593288] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967262
[ 4485.593416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967263
[ 4485.593532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967264
[ 4485.593652] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967265
[ 4485.593678] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967266
[ 4485.850480] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967267
[ 4515.537222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967266
[ 4515.537223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967265
[ 4515.537224] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967264
[ 4515.537226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967263
[ 4515.537227] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967262
[ 4515.537228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967261
[ 4515.537229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967260
[ 4515.537231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967259
[ 4515.539155] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967260
[ 4515.539253] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967261
[ 4515.539324] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967262
[ 4515.539513] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967263
[ 4515.539522] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967264
[ 4543.939187] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967265
[ 4567.298898] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967264
[ 4567.308382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967263
[ 4567.317870] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967262
[ 4567.327355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967261
[ 4567.336851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967260
[ 4567.346342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967259
[ 4574.769978] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967260
[ 4574.770644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967261
[ 4574.770713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967262
[ 4585.659234] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967263
[ 4585.659638] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967264
[ 4585.659851] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967265
[ 4628.062519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967264
[ 4628.062521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967263
[ 4628.062522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967262
[ 4628.062524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967261
[ 4628.062525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967260
[ 4628.062526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967259
[ 4628.067547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967260
[ 4628.067553] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967262
[ 4628.067549] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967261
[ 4628.067556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967263
[ 4628.067558] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967264
[ 4628.067643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967265
[ 4628.067650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967266
[ 4655.735972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967266
[ 4655.738016] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967267
[ 4655.740269] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967266
[ 4655.740270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967265
[ 4655.740271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967264
[ 4655.740273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967263
[ 4655.740274] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967262
[ 4655.740275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967261
[ 4655.740277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967260
[ 4655.740278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967259
[ 4655.744826] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967260
[ 4655.745042] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967261
[ 4655.745074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967262
[ 4655.745162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967263
[ 4684.693786] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967264
[ 4707.657198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967263
[ 4707.666685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967262
[ 4707.676172] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967261
[ 4707.685664] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967260
[ 4707.695155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967259
[ 4714.104353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967260
[ 4714.104370] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967261
[ 4714.104532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967262
[ 4714.104556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967263
[ 4714.104738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967264
[ 4714.104749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967265
[ 4714.104870] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967266
[ 4767.415146] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967265
[ 4767.424642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967264
[ 4767.434126] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967263
[ 4767.443610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967262
[ 4767.453092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967261
[ 4767.462576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967260
[ 4767.472063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967259
[ 4772.506161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967260
[ 4772.506215] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967261
[ 4772.506341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967262
[ 4772.506665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967263
[ 4772.506877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967264
[ 4772.507010] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967265
[ 4788.406891] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967266
[ 4841.522571] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967260
[ 4841.523093] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967261
[ 4907.596094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967260
[ 4907.596096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967259
[ 4907.596899] __add_stripe_bio: md127: start ff2721beec8c2fa0(27651747176+8) 4294967260
[ 4973.644083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27651747176+8) 4294967259
[ 4983.401423] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967260
[ 4983.401434] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967261
[ 4983.401439] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967262
[ 4983.412449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
[ 4983.412450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
[ 4983.412452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
[ 5009.844830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967263
[ 5009.844831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967262
[ 5009.844832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
[ 5009.844834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
[ 5009.844835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
[ 5036.841498] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967260
[ 5036.841516] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967261
[ 5036.841589] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967262
[ 5036.841711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967263
[ 5036.841866] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967264
[ 5036.842021] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967265
[ 5065.748527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967264
[ 5065.758011] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967263
[ 5065.767510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967262
[ 5065.776985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967261
[ 5065.786473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967260
[ 5065.795964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967259
[ 5069.198341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967260
[ 5069.198447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967261
[ 5069.198475] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967262
[ 5069.198565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967263
[ 5069.198600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967264
[ 5069.198657] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967265
[ 5069.198719] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967266
[ 5069.198749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967267
[ 5158.739304] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967265
[ 5158.748804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967264
[ 5158.758295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967263
[ 5158.767784] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967262
[ 5158.777267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967261
[ 5158.786749] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967260
[ 5158.796231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967259
[ 5174.398776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967260
[ 5174.398877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967261
[ 5174.398898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967262
[ 5174.398990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967263
[ 5174.399068] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967264
[ 5174.399165] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967265
[ 5215.123596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967264
[ 5215.133088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967263
[ 5215.142587] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967262
[ 5215.152066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967261
[ 5215.161549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967260
[ 5215.171035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967259
[ 5223.954743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967260
[ 5223.954779] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967261
[ 5223.954951] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967262
[ 5223.955007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967263
[ 5223.955223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967264
[ 5223.955228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967265
[ 5223.955230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967266
[ 5223.955472] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967267
[ 5259.738014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967266
[ 5260.031050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967265
[ 5260.324115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967264
[ 5260.617186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967263
[ 5260.910270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967262
[ 5261.203354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967261
[ 5261.496406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967260
[ 5261.789480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967259
[ 5265.172862] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967260
[ 5265.173244] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967261
[ 5265.173322] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967262
[ 5265.173763] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967263
[ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967265
[ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967264
[ 5265.173928] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967266
[ 5265.173973] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967267
[ 5294.960280] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967266
[ 5295.249813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967265
[ 5295.539340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967264
[ 5295.829752] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967263
[ 5296.120184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967262
[ 5296.410623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967261
[ 5296.701004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967260
[ 5296.991418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967259
[ 5298.908411] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967260
[ 5298.908458] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967261
[ 5298.908544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967262
[ 5298.908650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967263
[ 5298.908710] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967264
[ 5298.909051] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967265
[ 5413.856013] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667058856+8) 4294967266
[ 5446.671677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967265
[ 5446.671679] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967264
[ 5446.671680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967263
[ 5446.671681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967262
[ 5446.671682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967261
[ 5446.671683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967260
[ 5446.671684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967259
[ 5479.015532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967260
[ 5479.015632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967261
[ 5479.015683] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967262
[ 5479.015735] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967263
[ 5492.731793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967264
[ 5492.731911] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967265
[ 5506.007986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967264
[ 5506.007988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967263
[ 5506.007991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967262
[ 5506.007994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967261
[ 5506.007997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967260
[ 5506.008000] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967259
[ 5506.011738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967260
[ 5506.011854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967261
[ 5506.011861] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967262
[ 5506.012133] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967263
[ 5506.012143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967264
[ 5555.890832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967263
[ 5555.900322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967262
[ 5555.909852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967261
[ 5555.919336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967260
[ 5555.928823] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967259
[ 5574.002280] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967260
[ 5574.002313] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967261
[ 5574.002403] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967262
[ 5574.002468] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967263
[ 5574.002561] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967264
[ 5574.002645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967265
[ 5606.796975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967264
[ 5606.796977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967263
[ 5606.796978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967262
[ 5606.796979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967261
[ 5606.796981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967260
[ 5606.796982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967259
[ 5606.798208] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967260
[ 5606.798527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967261
[ 5606.798585] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967262
[ 5606.798607] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967263
[ 5606.803857] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967264
[ 5606.804282] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967265
[ 5639.962722] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967266
[ 5652.645345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967265
[ 5652.654833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967264
[ 5652.664323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967263
[ 5652.673815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967262
[ 5652.683294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967261
[ 5652.692781] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967260
[ 5654.603101] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967259
[ 5654.613230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967260
[ 5654.613572] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967261
[ 5654.613687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967262
[ 5654.613814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967263
[ 5654.614055] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967264
[ 5683.045381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967263
[ 5683.045383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967262
[ 5683.045385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967261
[ 5683.045387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967260
[ 5683.045388] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967259
[ 5683.048586] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967260
[ 5683.048965] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967261
[ 5683.049073] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967262
[ 5683.049140] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967263
[ 5683.049162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967264
[ 5683.049196] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967265
[ 5683.049256] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967266
[ 5683.474474] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967267
[ 5723.855633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967266
[ 5724.027114] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967265
[ 5724.198621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967264
[ 5724.370117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967263
[ 5724.541614] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967262
[ 5724.713065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967261
[ 5724.884518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967260
[ 5725.055989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967259
[ 5730.407790] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967260
[ 5730.407821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967261
[ 5730.407937] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967263
[ 5730.407896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967262
[ 5730.408159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967265
[ 5730.408143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967264
[ 5730.408353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967266
[ 5758.122868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967260
[ 5758.123578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967261
[ 5758.123627] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967262
[ 5758.130240] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967263
[ 5758.130420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967264
[ 5758.130534] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967265
[ 5846.663594] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967264
[ 5846.673081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967263
[ 5846.682568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967262
[ 5846.692061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967261
[ 5846.701549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967260
[ 5846.711038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967259
[ 5911.098887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967265
[ 5911.108378] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967264
[ 5911.117873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967263
[ 5911.127372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967262
[ 5911.136857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967261
[ 5911.146341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967260
[ 5911.155824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967259
[ 5928.422955] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967260
[ 5928.422960] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967261
[ 5928.422966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967262
[ 5928.422974] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967264
[ 5928.422972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967263
[ 6014.681387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967265
[ 6014.690871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967264
[ 6014.700360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967263
[ 6014.709859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967262
[ 6014.719355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967261
[ 6014.728844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967260
[ 6014.738332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967259
[ 6056.379708] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967265
[ 6056.389200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967264
[ 6056.398687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967263
[ 6056.408178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967262
[ 6056.417667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967261
[ 6056.427155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967260
[ 6056.436640] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967259
[ 6063.789506] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967260
[ 6063.789738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967261
[ 6063.789989] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967262
[ 6063.790034] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967263
[ 6063.790190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967264
[ 6063.790287] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967265
[ 6078.230963] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967266
[ 6105.353125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967265
[ 6105.362612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967264
[ 6105.372093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967263
[ 6105.381577] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967262
[ 6105.391064] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967261
[ 6105.400555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967260
[ 6105.410041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967259
[ 6128.587354] __add_stripe_bio: md127: start ff2721beec8c2fa0(27920223080+8) 4294967260
[ 6201.907683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27920223080+8) 4294967259
[ 6210.113728] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032744680+8) 4294967259
[ 6283.753223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967264
[ 6283.762710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967263
[ 6283.772194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967262
[ 6283.781677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967261
[ 6283.791163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967260
[ 6283.800647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967259
[ 6294.185205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967260
[ 6294.185349] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967261
[ 6354.564956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967266
[ 6354.565008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967265
[ 6354.565050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967264
[ 6354.565103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967263
[ 6354.565143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967262
[ 6354.565198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967261
[ 6354.565250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967260
[ 6354.565295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967259
[ 6354.571582] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967260
[ 6354.571613] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967261
[ 6354.571614] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967262
[ 6354.572095] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967263
[ 6381.572101] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967264
[ 6381.572150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967265
[ 6381.572462] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967266
[ 6417.668789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967265
[ 6417.678285] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967264
[ 6417.687773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967263
[ 6417.697266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967262
[ 6417.706822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967261
[ 6417.716318] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967260
[ 6417.725807] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967259
[ 6442.242691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967260
[ 6442.242776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967261
[ 6442.242901] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967262
[ 6442.242998] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967263
[ 6442.243060] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967264
[ 6442.243109] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967265
[ 6487.368252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967264
[ 6487.368256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967263
[ 6487.384984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967262
[ 6487.401709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967261
[ 6487.418441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967260
[ 6487.418447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967259
[ 6512.350543] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967260
[ 6512.351290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967261
[ 6512.351395] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967262
[ 6512.351419] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967263
[ 6512.351565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967264
[ 6512.351578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967265
[ 6512.351611] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967266
[ 6558.339111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967265
[ 6558.339113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967264
[ 6558.339115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967263
[ 6558.339118] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967262
[ 6558.339120] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967261
[ 6558.339122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967260
[ 6558.339123] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967259
[ 6604.012612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967265
[ 6604.012615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967264
[ 6604.012617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967263
[ 6604.012619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967262
[ 6604.012622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967261
[ 6604.012624] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967260
[ 6604.012626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967259
[ 6636.116612] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967260
[ 6636.117018] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967261
[ 6636.117064] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967262
[ 6636.117191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967263
[ 6636.117217] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967265
[ 6636.117204] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967264
[ 6636.117365] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967266
[ 6697.656762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967265
[ 6697.666250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967264
[ 6697.675731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967263
[ 6697.685213] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967262
[ 6697.694703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967261
[ 6697.704188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967260
[ 6697.713685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967259
[ 6699.748818] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967260
[ 6699.749045] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967261
[ 6699.749350] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967262
[ 6699.749488] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967264
[ 6699.749487] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967263
[ 6699.749673] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967265
[ 6700.169570] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967266
[ 6714.982644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967264
[ 6714.982749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967265
[ 6752.225916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967264
[ 6752.235410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967263
[ 6752.244901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967262
[ 6752.254387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967261
[ 6752.263875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967260
[ 6752.273361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967259
[ 6763.509990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967260
[ 6763.510135] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967261
[ 6763.510150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967262
[ 6763.510183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967263
[ 6763.510242] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967264
[ 6763.510270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967265
[ 6763.512906] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967266
[ 6823.022727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967265
[ 6823.022730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967264
[ 6823.022731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967263
[ 6823.022734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967262
[ 6823.022735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967261
[ 6823.022736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967260
[ 6823.022738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967259
[ 6823.024701] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967260
[ 6823.024824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967261
[ 6823.025069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967263
[ 6823.024976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967262
[ 6823.025323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967264
[ 6823.025427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967265
[ 6929.234367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967264
[ 6929.243863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967263
[ 6929.253358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967262
[ 6929.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967261
[ 6929.272333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967260
[ 6929.281822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967259
[ 6930.403685] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967260
[ 6930.403904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967261
[ 6930.404088] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967262
[ 6930.404223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967263
[ 6930.404286] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967264
[ 6930.404292] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967265
[ 6994.814514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967264
[ 6994.824001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967263
[ 6994.833494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967262
[ 6994.842983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967261
[ 6994.852473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967260
[ 6994.861960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967259
[ 6997.854357] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935572712+8) 4294967265
[ 7031.426286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967264
[ 7031.435774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967263
[ 7031.452511] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967262
[ 7031.468341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967261
[ 7031.484182] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967260
[ 7039.434351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967259
[ 7045.236931] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967260
[ 7045.237482] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967261
[ 7045.237696] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967262
[ 7045.237743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967263
[ 7056.937353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967264
[ 7056.937578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967265
[ 7056.940551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967264
[ 7056.940553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967263
[ 7056.940554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967262
[ 7056.940555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967261
[ 7056.940556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967260
[ 7056.940557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967259
[ 7083.864814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967260
[ 7083.865053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967262
[ 7083.865036] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967261
[ 7083.865102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967263
[ 7083.865159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967264
[ 7083.964009] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967265
[ 7095.497485] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967266
[ 7155.158072] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967265
[ 7155.158073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967264
[ 7155.158074] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967263
[ 7155.158076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967262
[ 7155.158077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967261
[ 7155.158078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967260
[ 7155.158079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967259
[ 7155.165525] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967260
[ 7180.285881] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967261
[ 7183.167275] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967262
[ 7183.414146] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967263
[ 7224.653276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967265
[ 7224.662765] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967264
[ 7224.672249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967263
[ 7224.681736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967262
[ 7224.691228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967261
[ 7224.700720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967260
[ 7224.710207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967259
[ 7229.399854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967260
[ 7229.399922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967261
[ 7229.400041] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967262
[ 7229.400099] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967263
[ 7229.400157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967264
[ 7229.400221] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967265
[ 7288.006416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967260
[ 7288.006417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967261
[ 7288.006420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967262
[ 7288.006422] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967263
[ 7288.006605] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967264
[ 7288.006752] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967265
[ 7288.006975] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967266
[ 7353.182856] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967261
[ 7353.182854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967260
[ 7353.182949] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967262
[ 7353.183001] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967263
[ 7353.183401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967264
[ 7353.183737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967266
[ 7353.183726] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967265
[ 7353.184047] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967267
[ 7443.628841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967264
[ 7443.638347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967263
[ 7443.647826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967262
[ 7443.657311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967261
[ 7443.666797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967260
[ 7443.676282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967259
[ 7501.172830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967263
[ 7501.182322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967262
[ 7501.191809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967261
[ 7501.201294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967260
[ 7501.210778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967259
[ 7508.208830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967260
[ 7508.209597] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967261
[ 7508.209670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967262
[ 7522.177756] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967263
[ 7522.177879] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967264
[ 7522.177881] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967265
[ 7550.776037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967264
[ 7550.785525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967263
[ 7550.795016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967262
[ 7550.804501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967261
[ 7550.813985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967260
[ 7550.823470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967259
[ 7556.140566] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967260
[ 7556.140598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967261
[ 7556.140739] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967262
[ 7556.140798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967263
[ 7556.140931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967264
[ 7556.141063] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967265
[ 7556.141111] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967266
[ 7556.141212] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967267
[ 7589.706135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967266
[ 7589.991286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967265
[ 7590.277340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967264
[ 7590.563347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967263
[ 7590.849389] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967262
[ 7591.135445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967261
[ 7591.421473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967260
[ 7591.707517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967259
[ 7606.172838] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204191720+8) 4294967260
[ 7703.615017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967263
[ 7703.624510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967262
[ 7703.634001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967261
[ 7703.643491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967260
[ 7703.652977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967259
[ 7708.933190] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967260
[ 7708.933333] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967261
[ 7708.933473] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967262
[ 7708.933618] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967263
[ 7708.933620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967264
[ 7708.933657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967265
[ 7708.933663] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967266
[ 7758.303406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967265
[ 7758.303407] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967264
[ 7758.303408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967263
[ 7758.303410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967262
[ 7758.303411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967261
[ 7758.303412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967260
[ 7758.303413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967259
[ 7778.439143] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967260
[ 7778.439197] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967261
[ 7778.439279] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967262
[ 7778.439376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967263
[ 7778.439409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967264
[ 7778.439494] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967265
[ 7858.899558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967264
[ 7858.899559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967263
[ 7858.899561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967262
[ 7858.899562] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967261
[ 7858.899563] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967260
[ 7858.899564] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967259
[ 7890.583124] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967260
[ 7890.583147] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967261
[ 7890.583594] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967262
[ 7890.583650] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967263
[ 7890.584141] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967264
[ 7890.584215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967265
[ 7890.584351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967266
[ 7952.730165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967265
[ 7952.739650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967264
[ 7952.749137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967263
[ 7952.758627] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967262
[ 7952.768110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967261
[ 7952.777595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967260
[ 7952.787077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967259
[ 7966.676635] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967260
[ 7966.676647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967261
[ 7966.676670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967263
[ 7966.676658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967262
[ 7966.676686] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967264
[ 7966.677634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967265
[ 7966.708979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967266
[ 8032.243861] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967265
[ 8032.253352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967264
[ 8032.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967263
[ 8032.272336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967262
[ 8032.281826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967261
[ 8032.291317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967260
[ 8032.300800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967259
[ 8043.901514] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967260
[ 8043.901516] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967261
[ 8043.901563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967262
[ 8043.901612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967264
[ 8043.901609] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967263
[ 8043.907672] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967265
[ 8146.468217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967264
[ 8146.477717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967263
[ 8146.487204] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967262
[ 8146.496692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967261
[ 8146.506180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967260
[ 8146.515671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967259
[ 8151.492003] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967260
[ 8151.492121] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967261
[ 8151.492457] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967262
[ 8151.492601] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967264
[ 8151.492590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967263
[ 8151.492750] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967265
[ 8164.821795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967266
[ 8200.519377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967260
[ 8200.519505] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967261
[ 8200.519782] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967262
[ 8200.519805] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967263
[ 8200.520020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967264
[ 8200.520247] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967265
[ 8200.520434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967266
[ 8200.520558] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967267
[ 8231.475052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967266
[ 8231.637009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967265
[ 8231.799001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967264
[ 8231.960972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967263
[ 8232.122948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967262
[ 8232.284905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967261
[ 8232.446896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967260
[ 8232.608893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967259
[ 8251.913368] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967260
[ 8251.913382] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967263
[ 8251.913377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967261
[ 8251.913388] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967265
[ 8251.913387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967264
[ 8251.913379] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967262
[ 8302.630262] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967261
[ 8302.630318] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967263
[ 8302.630275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967262
[ 8302.630250] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967260
[ 8302.630745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967264
[ 8377.488095] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967263
[ 8377.497581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967262
[ 8377.507068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967261
[ 8377.516554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967260
[ 8377.526049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967259
[ 8382.868830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967260
[ 8382.868911] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967261
[ 8382.869109] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967262
[ 8382.869323] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967263
[ 8382.983582] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967264
[ 8407.895450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967263
[ 8407.895452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967262
[ 8407.895455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967261
[ 8407.895457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967260
[ 8407.895459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967259
[ 8454.553018] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967264
[ 8454.570223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967263
[ 8454.587423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967262
[ 8454.587426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967261
[ 8454.604638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967260
[ 8454.621845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967259
[ 8498.349228] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967261
[ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967262
[ 8498.349217] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967260
[ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967263
[ 8498.349317] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967264
[ 8498.349366] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967265
[ 8517.904808] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967266
[ 8551.340824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967265
[ 8551.350319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967264
[ 8551.359809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967263
[ 8551.369300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967262
[ 8551.378788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967261
[ 8551.388272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967260
[ 8551.397756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967259
[ 8599.114364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967265
[ 8599.114366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967264
[ 8599.114367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967263
[ 8599.114368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967262
[ 8599.114370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967261
[ 8599.114371] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967260
[ 8599.114372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967259
[ 8599.117759] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
[ 8623.906310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
[ 8623.909333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967260
[ 8623.909335] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967259
[ 8623.909624] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
[ 8623.910846] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
[ 8623.913364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967262
[ 8625.066563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967263
[ 8651.338552] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204280104+8) 4294967267
[ 8693.930170] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967266
[ 8693.939647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967265
[ 8693.949139] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967264
[ 8693.958637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967263
[ 8693.968135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967262
[ 8693.977622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967261
[ 8693.987106] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967260
[ 8693.996589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967259
[ 8703.470915] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967260
[ 8703.470931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967261
[ 8703.470985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967264
[ 8703.470977] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967263
[ 8703.470957] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967262
[ 8703.471000] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967265
[ 8703.471037] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967266
[ 8771.557361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967265
[ 8771.566858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967264
[ 8771.576344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967263
[ 8771.585830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967262
[ 8771.595316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967261
[ 8771.604797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967260
[ 8771.614273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967259
[ 8772.929338] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967260
[ 8772.929454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967261
[ 8772.929545] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967262
[ 8772.929764] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967263
[ 8772.929816] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967264
[ 8772.929850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967265
[ 8847.837534] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967263
[ 8847.837548] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967264
[ 8847.843801] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967265
[ 8945.958057] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967260
[ 8945.958072] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967261
[ 8945.958101] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967262
[ 8945.958105] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967263
[ 8945.958112] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967264
[ 8945.958137] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967265
[ 8991.941073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967264
[ 8991.941075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967263
[ 8991.941076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967262
[ 8991.941077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967261
[ 8991.941078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967260
[ 8991.941080] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967259
[ 9036.005328] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967260
[ 9036.005409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967261
[ 9036.006275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967262
[ 9036.006348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967263
[ 9036.006436] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967264
[ 9036.006517] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967265
[ 9089.090686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967264
[ 9089.100181] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967263
[ 9089.109670] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967262
[ 9089.119157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967261
[ 9089.128642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967260
[ 9089.138130] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967259
[ 9120.298531] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967266
[ 9120.298540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967265
[ 9120.298547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967264
[ 9120.298555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967263
[ 9120.298568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967262
[ 9120.298581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967261
[ 9120.298599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967260
[ 9120.298613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967259
[ 9120.440293] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967260
[ 9120.440348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967261
[ 9120.440387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967262
[ 9120.440528] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967263
[ 9120.440553] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967264
[ 9120.440625] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967265
[ 9157.076832] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967266
[ 9204.360610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967265
[ 9204.370096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967264
[ 9204.379585] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967263
[ 9204.389070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967262
[ 9204.398557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967261
[ 9204.408047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967260
[ 9204.417534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967259
[ 9235.036854] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967260
[ 9235.036941] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967261
[ 9235.036985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967262
[ 9235.037076] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967263
[ 9235.037103] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967264
[ 9235.037202] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967265
[ 9326.601083] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967260
[ 9326.601219] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967261
[ 9326.601490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967262
[ 9326.601519] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967263
[ 9326.601556] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967264
[ 9326.601644] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967265
[ 9326.601736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967266
[ 9326.607310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967267
[ 9358.046046] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967266
[ 9358.055533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967265
[ 9358.065019] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967264
[ 9358.074517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967263
[ 9358.084002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967262
[ 9358.093495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967261
[ 9358.102997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967260
[ 9358.112485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967259
[ 9361.800927] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967260
[ 9361.801104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967261
[ 9361.801282] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967262
[ 9361.801484] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967263
[ 9361.801527] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967264
[ 9361.801620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967265
[ 9361.801653] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967266
[ 9430.438322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967265
[ 9430.447809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967264
[ 9430.457294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967263
[ 9430.466782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967262
[ 9430.476277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967261
[ 9430.485766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967260
[ 9430.495250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967259
[ 9444.781947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967260
[ 9444.781963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967261
[ 9444.782418] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967262
[ 9444.782605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967263
[ 9444.782662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967264
[ 9444.782718] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967265
[ 9444.782857] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967266
[ 9444.782963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967267
[ 9460.949714] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967266
[ 9461.002252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967265
[ 9461.054806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967264
[ 9461.107370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967263
[ 9461.159913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967262
[ 9461.212473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967261
[ 9461.265014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967260
[ 9461.317558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967259
[ 9472.880989] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967260
[ 9472.881022] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967262
[ 9472.881013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967261
[ 9472.881466] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967263
[ 9472.881585] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967264
[ 9472.881617] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967265
[ 9472.881636] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967266
[ 9473.230016] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967267
[ 9484.525992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967266
[ 9484.607866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967265
[ 9484.690643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967264
[ 9484.773393] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967263
[ 9484.856144] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967262
[ 9484.938897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967261
[ 9485.021665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967260
[ 9485.104419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967259
[ 9565.878800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967265
[ 9565.888283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967264
[ 9565.897767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967263
[ 9565.907250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967262
[ 9565.916734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967261
[ 9565.926219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967260
[ 9565.935703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967259
[ 9569.109038] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032745832+8) 4294967260
[ 9613.943060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032745832+8) 4294967259
[ 9625.083909] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967264
[ 9625.083911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967263
[ 9625.083913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967262
[ 9625.083914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967261
[ 9625.083916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967260
[ 9625.083917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967259
[ 9686.963155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8599505896+8) 4294967259
[ 9706.444605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967260
[ 9706.444608] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967261
[ 9706.444612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967262
[ 9706.444615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967263
[ 9706.444671] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967264
[ 9706.462013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967265
[ 9709.460551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967266
[ 9713.748452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967266
[ 9713.748682] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967267
[ 9713.753732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967266
[ 9713.753735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967265
[ 9713.753736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967264
[ 9713.753737] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967263
[ 9713.753739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967262
[ 9713.753740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967261
[ 9713.753741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967260
[ 9713.753743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967259
[ 9742.956450] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967260
[ 9742.956502] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967261
[ 9742.956503] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967262
[ 9742.956551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967263
[ 9743.152042] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967264
[ 9756.828598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967265
[ 9811.500567] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967264
[ 9811.510052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967263
[ 9811.519534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967262
[ 9811.529017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967261
[ 9811.538503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967260
[ 9811.547984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967259
[ 9816.207521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967266
[ 9816.207576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967265
[ 9816.207628] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967264
[ 9816.207682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967263
[ 9816.207747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967262
[ 9816.207801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967261
[ 9816.207859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967260
[ 9816.207914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967259
[ 9816.211655] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967260
[ 9816.211658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967261
[ 9816.211787] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967262
[ 9816.211882] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967263
[ 9816.211901] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967264
[ 9816.211947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967265
[ 9919.228920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967264
[ 9919.238395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967263
[ 9919.247877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967262
[ 9919.257360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967261
[ 9919.266845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967260
[ 9919.276332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967259
[ 9921.581043] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967261
[ 9921.580965] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967260
[ 9921.581099] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967262
[ 9921.581185] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967263
[ 9921.581304] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967264
[ 9921.581341] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967265
[ 9921.581364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967266
[ 9921.581365] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967267
[ 9921.583872] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967266
[ 9921.583882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967265
[ 9921.583897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967264
[ 9921.583913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967263
[ 9921.583929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967262
[ 9921.583945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967261
[ 9921.583960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967260
[ 9921.583977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967259
[ 9951.448268] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314088+8) 4294967260
[ 9986.682431] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967262
[ 9986.682540] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967263
[ 9986.682661] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967264
[ 9986.687019] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967265
[10026.057977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967264
[10026.057980] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967263
[10026.057982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967262
[10026.057984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967261
[10026.057986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967260
[10026.057987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967259
[10026.060967] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967260
[10026.061269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967261
[10026.061642] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967262
[10026.061728] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967264
[10026.061715] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967263
[10026.061745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967265
[10026.061790] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967266
[10026.061809] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967267
[10055.918459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967266
[10056.153717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967265
[10056.388985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967264
[10056.624244] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967263
[10056.859507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967262
[10057.094768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967261
[10057.330043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967260
[10057.565339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967259
[10060.361434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967260
[10060.361487] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967261
[10060.361634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967262
[10060.361709] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967263
[10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967264
[10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967265
[10060.361723] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967266
[10108.302805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967265
[10108.302808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967264
[10108.302810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967263
[10108.302812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967262
[10108.302814] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967261
[10108.302816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967260
[10108.302818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967259
[10108.307331] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967260
[10108.307515] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967261
[10108.307665] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967263
[10108.307647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967262
[10108.308273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967264
[10108.308454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967266
[10108.308426] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967265
[10108.308720] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967267
[10138.300402] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967266
[10138.516650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967265
[10138.732928] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967264
[10138.949166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967263
[10139.165431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967262
[10139.381678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967261
[10139.597964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967260
[10139.814237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967259
[10144.088915] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746728+8) 4294967260
[10188.320954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746728+8) 4294967259
[10196.658979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967260
[10196.659207] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967261
[10196.659343] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967262
[10196.659430] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967263
[10196.659439] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967264
[10196.660163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967265
[10196.660183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967266
[10279.470741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967265
[10279.480231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967264
[10279.489723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967263
[10279.499214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967262
[10279.508709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967261
[10279.518206] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967260
[10279.527693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967259
[10286.896922] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746984+8) 4294967260
[10304.464308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746984+8) 4294967259
[10305.667186] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967260
[10305.667269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967261
[10305.667270] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967262
[10305.667429] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967263
[10305.667512] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967264
[10305.667590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967265
[10305.667639] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967266
[10334.751820] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
[10415.904902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
[10415.960311] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
[10423.280595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
[10439.777461] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967263
[10439.777592] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967264
[10439.777647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967265
[10439.777786] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967266
[10497.698903] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967265
[10497.698905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967264
[10497.698907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967263
[10497.698908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967262
[10497.698910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967261
[10497.698911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967260
[10497.698912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967259
[10497.701113] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967260
[10497.701118] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967261
[10497.701183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967262
[10497.701490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967263
[10497.701908] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967264
[10497.702132] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967265
[10593.723273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967264
[10593.723280] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967265
[10593.723381] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967266
[10681.179411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967265
[10681.188893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967264
[10681.198368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967263
[10681.207851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967262
[10681.217350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967261
[10681.226842] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967260
[10681.236340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967259
[10689.151359] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967260
[10689.151633] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967261
[10689.151649] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967262
[10689.151700] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967263
[10689.151823] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967264
[10689.152267] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967265
[10689.152370] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967266
[10765.465443] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967260
[10765.465539] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967261
[10765.465942] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967262
[10765.466073] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967263
[10765.471339] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967264
[10765.471352] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967265
[10765.471615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967266
[10810.806942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967265
[10810.816430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967264
[10810.825920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967263
[10810.835410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967262
[10810.844892] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967261
[10810.854379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967260
[10810.863866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967259
[10829.079850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741313832+8) 4294967260
[10849.794350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
[10918.134642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967264
[10918.144122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967263
[10918.153605] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967262
[10918.163087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967261
[10918.172569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967260
[10918.182053] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
[10925.672507] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967260
[10925.672673] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967262
[10925.672533] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967261
[10925.672788] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967263
[10925.672958] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967264
[10925.672978] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967265
[10925.673524] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967266
[10925.673721] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967267
[10938.064956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967266
[10938.263167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967265
[10938.461362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967264
[10938.659537] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967263
[10938.857718] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967262
[10939.055907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967261
[10939.254110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967260
[10939.452289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967259
[10942.625740] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967260
[10942.625791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967262
[10942.625850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967264
[10942.625849] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967263
[10942.625789] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967261
[10942.626403] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967265
[11020.726643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967264
[11020.726645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967263
[11020.726646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967262
[11020.726648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967261
[11020.726649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967260
[11020.726651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967259
[11045.762697] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967262
[11045.763802] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967263
[11045.763966] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967264
[11045.764660] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967265
[11072.427987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967266
[11072.429693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967265
[11072.429971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967264
[11072.430020] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967263
[11072.430076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967262
[11072.430127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967261
[11072.430180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967260
[11072.430234] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967259
[11103.176662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967260
[11103.176760] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967261
[11103.176914] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967262
[11103.176947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967263
[11103.177351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967264
[11103.243210] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967265
[11197.368568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967264
[11197.378067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967263
[11197.387571] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967262
[11197.397061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967261
[11197.406551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967260
[11197.416027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967259
[11213.302960] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741352808+8) 4294967260
[11214.005987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967265
[11214.005989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967264
[11214.005990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967263
[11214.005991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967262
[11214.005992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967261
[11214.005994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967260
[11214.005995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967259
[11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967260
[11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967261
[11251.017233] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967263
[11251.017238] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967266
[11251.017236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967265
[11251.017234] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967264
[11251.017231] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967262
[11269.980634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967267
[11289.588526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967266
[11289.598008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967265
[11289.607492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967264
[11289.616977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967263
[11289.626462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967262
[11289.635951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967261
[11289.645439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967260
[11289.654929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967259
[11289.955406] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967260
[11289.955748] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967261
[11289.955828] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967262
[11289.956014] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967263
[11310.687017] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967264
[11344.753308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967266
[11344.753344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967265
[11344.753381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967264
[11344.753417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967263
[11344.753453] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967262
[11344.753488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967261
[11344.753524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967260
[11344.753559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967259
[11372.057310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967260
[11372.057453] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967261
[11372.058126] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967262
[11372.058225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967263
[11372.058236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967264
[11372.058445] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967265
[11477.692568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967264
[11477.702057] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967263
[11477.711549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967262
[11477.721041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967261
[11477.730527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967260
[11477.740015] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967259
[11484.738964] __add_stripe_bio: md127: start ff2721beec8c2fa0(28725142504+8) 4294967260
[11516.590602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28725142504+8) 4294967259
[11541.514580] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967260
[11541.514657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967261
[11541.514736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967262
[11541.514795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967263
[11541.514937] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967264
[11541.514959] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967265
[11541.515020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967266
[11541.515178] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967267
[11589.245255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967266
[11589.530527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967265
[11589.815730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967264
[11590.100882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967263
[11590.386071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967262
[11590.671228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967261
[11590.956368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967260
[11591.241504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967259
[11665.205805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967264
[11665.215300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967263
[11665.224797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967262
[11665.234283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967261
[11665.243770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967260
[11665.253265] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967259
[11676.491376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967260
[11676.491588] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967261
[11676.491637] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967262
[11676.491798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967263
[11676.492010] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967264
[11785.703215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967260
[11787.558791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967261
[11787.558892] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967262
[11791.614199] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967263
[11793.388452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967264
[11795.421600] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967265
[11795.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967266
[11795.424300] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967267
[11795.426502] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967266
[11795.426503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967265
[11795.426504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967264
[11795.426505] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967263
[11795.426506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967262
[11795.426507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967261
[11795.426509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967260
[11795.426510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967259
[11871.476819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967260
[11871.486321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967259
[11919.615037] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967260
[11919.615114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967261
[11919.615374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967262
[11919.615395] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967263
[11919.615491] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967264
[11928.774840] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967267
[12000.001506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967264
[12000.001507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967263
[12000.001509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967262
[12000.001510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967261
[12000.001512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967260
[12000.001514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967259
[12033.086368] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967260
[12033.086440] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967261
[12033.086702] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967262
[12033.086790] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967263
[12033.087023] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967264
[12071.801701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967262
[12071.801727] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967263
[12071.801733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967264
[12071.801859] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967265
[12071.801934] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967266
[12147.838099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967265
[12147.838104] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967264
[12147.838108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967263
[12147.838113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967262
[12147.838117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967261
[12147.838122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967260
[12147.838131] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967259
[12161.825218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967260
[12171.278213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967261
[12171.278308] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967262
[12171.278349] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967263
[12171.278418] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967264
[12171.278481] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967265
[12225.694791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967264
[12225.704279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967263
[12225.713774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967262
[12225.723264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967261
[12225.732759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967260
[12225.742248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967259
[12241.217982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967260
[12241.218022] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967261
[12241.218156] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967262
[12241.218291] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967263
[12241.218712] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967264
[12241.225590] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967265
[12324.241931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967264
[12324.251421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967263
[12324.260905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967262
[12324.270390] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967261
[12324.279874] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967260
[12324.289362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967259
[12330.627282] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967260
[12330.627283] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967261
[12330.627356] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967262
[12330.627452] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967263
[12330.627459] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967264
[12330.627476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967265
[12360.643250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967265
[12360.643251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967264
[12360.643253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967263
[12360.643254] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967262
[12360.643256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967261
[12360.643257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967260
[12360.643258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967259
[12412.055085] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967262
[12412.055185] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967263
[12412.055346] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967264
[12412.055358] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967265
[12502.916463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967264
[12502.925955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967263
[12502.935437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967262
[12502.944921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967261
[12502.954403] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967260
[12502.963891] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967259
[12508.172163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28994103208+8) 4294967260
[12569.394168] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28994103208+8) 4294967259
[12669.806358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967264
[12669.815852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967263
[12669.825349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967262
[12669.834848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967261
[12669.844337] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967260
[12669.853824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967259
[12681.132403] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967260
[12681.132623] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967261
[12681.132903] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967262
[12681.133118] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967263
[12681.133360] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967264
[12681.133472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967265
[12681.133687] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967266
[12756.131024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967265
[12756.140520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967264
[12756.150010] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967263
[12756.159496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967262
[12756.168984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967261
[12756.178470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967260
[12756.187958] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967259
[12761.752380] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967260
[12761.752555] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967261
[12761.752566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967262
[12761.752717] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967263
[12761.752864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967264
[12761.753575] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967265
[12841.108192] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967264
[12841.117686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967263
[12841.127177] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967262
[12841.136667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967261
[12841.146150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967260
[12841.155642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967259
[12854.788520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967262
[12854.789006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967263
[12854.790480] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967264
[12854.792345] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967265
[12854.792371] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967266
[12854.792648] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967267
[12854.796137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967266
[12854.796140] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967265
[12854.796143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967264
[12854.796145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967263
[12854.796147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967262
[12854.796149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967261
[12854.796151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967260
[12854.796152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967259
[12979.496382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967265
[12979.505867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967264
[12979.515357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967263
[12979.524845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967262
[12979.534338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967261
[12979.543825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967260
[12979.553315] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967259
[12987.839356] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032747304+8) 4294967260
[13019.716541] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032747304+8) 4294967259
[13023.790667] __add_stripe_bio: md127: start ff2721beec8c2fa0(8053065000+8) 4294967260
[13166.159630] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8053065000+8) 4294967259
[13172.153701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967260
[13172.153856] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967261
[13172.154183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967262
[13172.154307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967263
[13172.154320] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967264
[13172.154325] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967265
[13172.154327] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967266
[13172.154540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967267
[13184.154929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967266
[13184.395309] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967265
[13184.635656] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967264
[13184.876002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967263
[13185.116361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967262
[13185.356722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967261
[13185.597099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967260
[13185.837445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967259
[13200.462739] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967260
[13200.463373] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967261
[13200.463433] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967262
[13200.463686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967263
[13200.463718] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967264
[13200.463750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967265
[13200.463801] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967266
[13200.463824] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967267
[13218.832166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967266
[13218.964853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967265
[13219.097514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967264
[13219.230197] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967263
[13219.362875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967262
[13219.495536] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967261
[13219.628221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967260
[13219.760852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967259
[13284.342646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967264
[13284.352133] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967263
[13284.361618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967262
[13284.371115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967261
[13284.380606] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967260
[13284.390092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967259
[13290.205839] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312350184+8) 4294967260
[13328.994695] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312350184+8) 4294967259
[13358.709379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967260
[13358.709396] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967261
[13358.709414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967262
[13358.709435] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967263
[13358.709475] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967264
[13362.737444] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967265
[13362.737666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967266
[13386.563843] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967260
[13386.563968] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967261
[13386.564092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967262
[13386.564227] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967263
[13386.564297] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967264
[13386.564364] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967265
[13386.564532] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967266
[13425.701678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967265
[13425.701680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967264
[13425.701681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967263
[13425.701682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967262
[13425.701684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967261
[13425.701686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967260
[13425.701688] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967259
[13491.009481] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967264
[13491.018974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967263
[13491.028463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967262
[13491.037945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967261
[13491.047430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967260
[13491.056918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967259
[13493.360976] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967260
[13493.361476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967261
[13493.361592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967262
[13493.366880] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967263
[13493.367183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967264
[13493.367399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967265
[13493.367635] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967266
[13493.367706] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967267
[13524.736988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967266
[13524.746476] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967265
[13524.755962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967264
[13524.765450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967263
[13524.774943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967262
[13524.784436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967261
[13524.793931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967260
[13524.803422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967259
[13529.444566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967260
[13529.444628] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967261
[13529.445249] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967262
[13529.445307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967263
[13529.445330] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967264
[13529.445578] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967265
[13529.445594] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967266
[13568.836756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967265
[13568.836757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967264
[13568.836759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967263
[13568.836766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967262
[13568.836768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967261
[13568.836769] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967260
[13568.836770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967259
[13595.486041] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967260
[13595.486114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967261
[13595.486450] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967262
[13595.486544] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967263
[13595.486756] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967264
[13595.486807] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967265
[13595.487127] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967266
[13684.444417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967265
[13684.453904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967264
[13684.463391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967263
[13684.472878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967262
[13684.482361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967261
[13684.491850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967260
[13684.501340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967259
[13686.643818] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967260
[13686.643853] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967261
[13686.643948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967262
[13686.643993] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967263
[13686.644144] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967264
[13686.644406] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967265
[13686.644525] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967266
[13734.793297] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967265
[13734.809137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967264
[13744.445102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967263
[13744.454593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967262
[13744.464083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967261
[13744.473560] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967260
[13744.483051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967259
[13820.332081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967264
[13820.341572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967263
[13820.351058] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967262
[13820.360550] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967261
[13820.370039] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967260
[13820.379525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967259
[13828.887574] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967260
[13828.888698] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967261
[13828.888811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967262
[13828.888838] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967263
[13828.888877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967264
[13828.889012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967265
[13828.889087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967266
[13939.263249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967265
[13939.272744] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967264
[13939.282233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967263
[13939.291721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967262
[13939.301203] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967261
[13939.310687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967260
[13939.320173] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967259
[13949.622362] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032748904+8) 4294967260
[13975.927477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324672360+8) 4294967267
[13975.932834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967266
[13975.932837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967265
[13975.932840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967264
[13975.932843] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967263
[13975.932845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967262
[13975.932848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967261
[13975.932851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967260
[13975.932853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967259
[14002.581980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967260
[14002.582096] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967261
[14002.582385] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967262
[14002.582558] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967263
[14002.582619] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967264
[14002.582667] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967265
[14119.130023] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967264
[14119.139513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967263
[14119.148996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967262
[14119.158486] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967261
[14119.167972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967260
[14119.177456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967259
[14124.072930] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967260
[14124.073025] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967261
[14124.073027] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967262
[14124.073126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967263
[14124.073129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967264
[14124.073379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967265
[14210.467642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967264
[14210.467644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967263
[14210.467645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967262
[14210.467647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967261
[14210.467650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967260
[14210.467652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967259
[14239.699953] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312349928+8) 4294967260
[14343.004904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312349928+8) 4294967259
[14351.607882] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967260
[14351.607884] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967261
[14351.607891] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967262
[14351.607979] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967263
[14351.608559] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967264
[14351.608811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967265
[14351.691948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967266
[14460.596492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967265
[14460.596493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967264
[14460.596494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967263
[14460.596495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967262
[14460.596496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967261
[14460.596497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967260
[14460.596499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967259
[14460.598694] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967260
[14460.598948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967261
[14460.599105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967262
[14460.599222] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967263
[14460.599261] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967264
[14460.599376] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967265
[14541.632593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967264
[14541.642093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967263
[14541.651581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967262
[14541.661069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967261
[14541.670554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967260
[14541.680042] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967259
[14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967260
[14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967261
[14547.547058] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967262
[14547.547167] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967263
[14547.547468] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967264
[14547.547552] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967265
[14547.547661] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967266
[14547.547697] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967267
[14575.315268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967266
[14575.596007] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967265
[14575.876716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967264
[14576.157450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967263
[14576.438196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967262
[14576.718943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967261
[14576.999682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967260
[14577.280400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967259
[14582.737304] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749480+8) 4294967260
[14636.539506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749480+8) 4294967259
[14638.880107] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749864+8) 4294967260
[14657.493830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749864+8) 4294967259
[14675.212921] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967260
[14675.213033] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967261
[14675.213091] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967262
[14675.213105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967263
[14675.213429] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967264
[14675.213477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967265
[14675.213877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967266
[14737.052888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967265
[14737.062372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967264
[14737.071858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967263
[14737.081341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967262
[14737.090827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967261
[14737.100311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967260
[14737.109810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967259
[14743.382147] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967260
[14743.382495] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967261
[14743.382520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967262
[14743.382595] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967263
[14743.382607] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967264
[14743.382646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967265
[14743.382683] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967266
[14836.115483] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967265
[14836.124971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967264
[14836.134457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967263
[14836.143943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967262
[14836.153427] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967261
[14836.162908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967260
[14836.172392] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967259
[14867.977410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967265
[14867.977411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967264
[14867.977413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967263
[14867.977414] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967262
[14867.977416] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967261
[14867.977418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967260
[14867.977419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967259
[14867.982046] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967260
[14867.982289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967261
[14867.982319] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967262
[14867.982377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967263
[14867.982398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967264
[14867.982409] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967265
[14906.532978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967264
[14906.532983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967263
[14906.532987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967262
[14906.532991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967261
[14906.532995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967260
[14906.533001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967259
[14906.537331] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967260
[14906.537374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967261
[14906.537389] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967262
[14906.537397] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967263
[14906.537413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967264
[14906.537417] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967265
[14906.538256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967266
[15000.553174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967265
[15000.562663] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967264
[15000.572145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967263
[15000.581619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967262
[15000.591109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967261
[15000.600591] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967260
[15000.610079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967259
[15038.840534] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967265
[15038.840569] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967266
[15038.918437] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967267
[15072.110477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967266
[15072.119963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967265
[15072.129445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967264
[15072.138931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967263
[15072.148422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967262
[15072.157919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967261
[15072.167410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967260
[15072.176895] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967259
[15094.337077] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967260
[15094.337106] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967262
[15094.337126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967264
[15094.337145] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967265
[15094.337125] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967263
[15094.337095] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967261
[15094.337202] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967266
[15094.337864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967267
[15109.185642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967266
[15109.231325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967265
[15109.276996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967264
[15109.322684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967263
[15109.368357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967262
[15109.414059] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967261
[15109.459734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967260
[15109.505426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967259
[15109.940853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967266
[15109.940854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967265
[15109.940856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967264
[15109.940857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967263
[15109.940858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967262
[15109.940860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967261
[15109.940862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967260
[15109.940863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967259
[15109.943869] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032750824+8) 4294967260
[15109.946364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032750824+8) 4294967259
[15143.938195] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967260
[15143.938199] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967261
[15143.938502] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967262
[15143.938860] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967263
[15143.939079] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967265
[15143.939055] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967264
[15194.321503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967264
[15194.330986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967263
[15194.340472] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967262
[15194.349954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967261
[15194.359431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967260
[15194.368920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967259
[15202.981864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967260
[15202.982006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967262
[15202.982049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967263
[15202.981938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967261
[15202.982750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967264
[15284.694932] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967263
[15284.704421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967262
[15284.713913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967261
[15284.723400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967260
[15284.732888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967259
[15293.005593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967264
[15293.005596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967263
[15293.005597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967262
[15293.005599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967261
[15293.005602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967260
[15293.005603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967259
[15293.008211] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032751272+8) 4294967260
[15293.009308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032751272+8) 4294967259
[15361.123918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967265
[15361.133405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967264
[15361.142894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967263
[15361.152384] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967262
[15361.161896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967261
[15361.171380] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967260
[15361.180864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967259
[15364.021538] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967260
[15364.021666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967261
[15364.022045] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967262
[15364.022114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967264
[15364.022092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967263
[15364.022137] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967265
[15364.022367] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967266
[15364.022522] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967267
[15374.355722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967266
[15374.554797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967265
[15374.753840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967264
[15374.952890] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967263
[15375.151938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967262
[15375.351041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967261
[15375.550089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967260
[15375.749178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967259
[15475.090773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967263
[15475.090778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967262
[15475.090782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967261
[15475.090790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967260
[15475.090795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967259
[15512.462087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530332008+8) 4294967260
[15605.182723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29530332008+8) 4294967259
[15610.777819] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967260
[15610.778112] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967261
[15610.778158] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967262
[15610.778540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967263
[15610.778741] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967264
[15610.778768] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967265
[15610.779126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967266
[15610.779136] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967267
[15625.092720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967266
[15625.291753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967265
[15625.490794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967264
[15625.689869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967263
[15625.888922] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967262
[15626.087961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967261
[15626.287002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967260
[15626.486060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967259
[15631.421893] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967260
[15631.422458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967261
[15631.422563] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967262
[15631.422804] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967263
[15631.422822] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967264
[15631.422885] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967265
[15649.575279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967266
[15684.592107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967265
[15684.601597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967264
[15684.611088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967263
[15684.620575] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967262
[15684.630060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967261
[15684.639545] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967260
[15684.649027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967259
[15690.850975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967260
[15690.851550] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967261
[15690.851939] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967262
[15690.852053] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967263
[15690.852178] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967264
[15690.852281] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967265
[15690.852313] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967266
[15735.620596] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967260
[15735.620646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967261
[15735.620685] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967263
[15735.620686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967264
[15735.620652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967262
[15735.621500] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967265
[15816.149770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967264
[15816.149772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967263
[15816.149774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967262
[15816.149776] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967261
[15816.149778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967260
[15816.149780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967259
[15844.779214] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967260
[15844.779218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967261
[15934.023215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967265
[15934.032699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967264
[15934.042184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967263
[15934.051668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967262
[15934.061153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967261
[15934.070639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967260
[15934.080128] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967259
[15935.505198] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967260
[15935.505344] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967261
[15935.505684] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967262
[15951.816975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967263
[15951.817541] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967264
[15951.817733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967265
[16048.370516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967264
[16048.370517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967263
[16048.370518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967262
[16048.370520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967261
[16048.370521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967260
[16048.370522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967259
[16048.372719] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967260
[16048.373083] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967261
[16048.373765] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967262
[16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967263
[16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967264
[16048.373936] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967265
[16048.373938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967266
[16048.373966] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967267
[16099.152109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967266
[16099.438210] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967265
[16099.724249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967264
[16100.011268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967263
[16100.298239] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967262
[16100.585188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967261
[16100.872155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967260
[16101.159163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967259
[16105.965912] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967260
[16105.966207] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967261
[16105.966279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967262
[16105.966399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967263
[16105.966643] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967264
[16105.966725] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967265
[16163.037754] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967260
[16163.037836] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967261
[16163.037883] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967262
[16163.038018] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967263
[16163.038537] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967264
[16163.039472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967265
[16163.263169] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967266
[16264.133495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967265
[16264.142981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967264
[16264.152464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967263
[16264.161951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967262
[16264.171441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967261
[16264.180935] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967260
[16264.190424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967259
[16266.992432] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967260
[16266.992711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967261
[16266.993213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967262
[16266.993398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967263
[16266.993458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967264
[16290.695547] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967265
[16290.695556] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967266
[16379.808266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967265
[16379.817751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967264
[16379.827236] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967263
[16379.836720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967262
[16379.846205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967261
[16379.855697] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967260
[16379.865187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967259
[16387.725516] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967260
[16387.725798] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967261
[16387.725917] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967262
[16387.726352] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967263
[16387.726414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967264
[16387.726707] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967265
[16408.022981] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967266
[16505.753964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967265
[16505.763456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967264
[16505.772952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967263
[16505.782444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967262
[16505.791921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967261
[16505.801405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967260
[16505.810889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967259
[16515.475620] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967261
[16515.475613] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967260
[16515.475711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967262
[16515.475844] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967263
[16515.475958] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967264
[16515.476377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967265
[16515.476744] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967266
[16534.160611] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967267
[16554.448056] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967266
[16554.457546] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967265
[16554.467022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967264
[16554.476504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967263
[16554.485979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967262
[16554.495463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967261
[16554.504953] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967260
[16554.514442] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967259
[16555.835592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967260
[16555.835847] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967261
[16555.836140] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967262
[16555.836279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967263
[16576.074289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967264
[16576.074652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967265
[16576.075049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967266
[16702.114836] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967265
[16702.124323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967264
[16702.133808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967263
[16702.143300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967262
[16702.152799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967261
[16702.162289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967260
[16702.171773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967259
[16710.388044] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967260
[16710.388157] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967261
[16710.388256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967262
[16710.388347] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967264
[16710.388390] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967265
[16710.388410] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967266
[16710.388285] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967263
[16710.389466] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967267
[16726.681690] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967266
[16732.076989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967265
[16732.227720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967264
[16732.378463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967263
[16732.529207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967262
[16732.679930] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967261
[16732.830681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967260
[16732.981424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967259
[16739.691982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967261
[16739.691980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967260
[16739.692049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967262
[16739.692753] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967263
[16739.693143] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967264
[16739.693286] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967265
[16739.693391] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967266
[16796.194339] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967260
[16796.194398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967261
[16796.194422] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967262
[16796.194483] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967263
[16796.194946] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967264
[16796.195038] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967265
[16796.195499] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967266
[16870.648462] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753896+8) 4294967260
[16898.893981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753896+8) 4294967259
[16967.311945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967265
[16967.321437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967264
[16967.330924] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967263
[16967.340412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967262
[16967.349896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967261
[16967.359379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967260
[16967.368867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967259
[16976.071394] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967260
[16976.071413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967261
[16976.071469] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967262
[16976.071568] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967263
[16976.071677] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967264
[16976.071732] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967266
[16976.071700] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967265
[16976.072068] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967267
[16990.311360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967266
[16990.481108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967265
[16990.650832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967264
[16990.820572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967263
[16990.990305] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967262
[16991.160043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967261
[16991.329786] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967260
[16991.499551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967259
[16996.987839] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754024+8) 4294967260
[17047.151615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754024+8) 4294967259
[17115.879905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967264
[17115.889391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967263
[17115.898868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967262
[17115.908352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967261
[17115.917837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967260
[17115.927320] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967259
[17116.400129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967260
[17116.400218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967261
[17116.400665] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967262
[17116.400972] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967263
[17116.401012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967264
[17116.401073] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967265
[17116.401094] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967266
[17173.631876] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967265
[17173.631877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967264
[17173.631878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967263
[17173.631880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967262
[17173.631881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967261
[17173.631883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967260
[17173.631885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967259
[17201.424309] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753960+8) 4294967260
[17209.574673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753960+8) 4294967259
[17212.610080] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967260
[17212.610550] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967261
[17226.574613] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967262
[17226.574818] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967263
[17226.575156] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967264
[17226.575266] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967265
[17226.575729] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967266
[17296.342998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967265
[17296.352083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967264
[17296.361178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967263
[17296.370271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967262
[17296.379366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967261
[17296.388464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967260
[17296.397557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967259
[17297.711781] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967260
[17297.712071] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967261
[17311.049521] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967262
[17311.049595] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967263
[17391.178025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967262
[17391.187127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967261
[17391.196230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967260
[17391.205325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967259
[17397.357339] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967260
[17397.358064] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967261
[17397.358191] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967262
[17406.112100] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967263
[17460.063716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967262
[17460.072623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967261
[17460.081520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967260
[17460.090422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967259
[17464.959257] __add_stripe_bio: md127: start ff2721beec8c2fa0(75624+8) 4294967260
[17482.594510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(75624+8) 4294967259
[17508.091641] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967261
[17508.091640] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967260
[17508.091647] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967262
[17532.456256] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967260
[17532.456294] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967261
[17532.456309] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967262
[17572.776358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967261
[17572.785257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967260
[17572.794163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967259
[17594.427109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(12348140520+8) 4294967259
[17631.571482] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967262
[17633.896087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967261
[17633.904990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967260
[17633.913889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967259
[17640.670153] __add_stripe_bio: md127: start ff2721beec8c2fa0(42344+8) 4294967262
[17661.740739] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967264
[17661.740869] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967265
[17691.866848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967264
[17691.866850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967263
[17691.866851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967262
[17691.866853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967261
[17691.866854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967260
[17691.866855] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967259
[17711.055783] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967260
[17711.055850] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967262
[17711.055807] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967261
[17753.659012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967261
[17753.667920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967260
[17753.676822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967259
[17756.839874] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754472+8) 4294967260
[17761.904589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754472+8) 4294967259
[17764.608952] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967260
[17764.609156] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967262
[17764.609117] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967261
[17764.609992] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967263
[17785.372101] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967264
[17785.480370] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967265
[17831.956995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967264
[17831.965897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967263
[17831.974795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967262
[17831.983692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967261
[17831.992592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967260
[17832.001495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967259
[17834.344591] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755304+8) 4294967260
[17843.122828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032755304+8) 4294967259
[17845.582553] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967260
[17845.582666] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967261
[17845.583154] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967262
[17845.583190] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967264
[17845.583179] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967263
[17895.265867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967263
[17895.274772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967262
[17895.283673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967261
[17895.292578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967260
[17895.301470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967259
[17898.047030] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967260
[17898.048282] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967261
[17898.049252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967260
[17898.049253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967259
[17910.857571] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967260
[17910.857605] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967261
[17940.805214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967260
[17940.805216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967259
[17953.692889] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967261
[17953.692929] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
[17953.693143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
[17953.694264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
[18003.530258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
[18003.539162] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967260
[18003.548066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967259
[18009.481434] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26216+8) 4294967259
[18009.492293] __add_stripe_bio: md127: start ff2721beec8c2fa0(25704+8) 4294967260
[18048.069279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25704+8) 4294967259
[18048.552558] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967260
[18048.552796] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967261
[18048.552825] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967262
[18048.554933] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967263
[18081.808216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967262
[18081.808217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967261
[18081.808219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967260
[18081.808220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967259
[18081.819956] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967260
[18081.820706] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967261
[18110.361331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967260
[18110.361332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967259
[18160.565496] __add_stripe_bio: md127: start ff2721beec8c2fa0(10792+8) 4294967263
[18169.306917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967260
[18169.306918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967259
[18169.318095] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967260
[18169.319212] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967261
[18169.394456] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967262
[18211.597621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(61800+8) 4294967259
[18261.334926] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967260
[18261.335380] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967261
[18297.192489] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25501361192+8) 4294967259
[18332.815982] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967262
[18332.816950] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967263
[18332.819467] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967264
[18332.819799] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967265
[18332.820819] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967266
[18363.308810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967265
[18363.308813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967264
[18363.308816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967263
[18363.308819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967262
[18363.308822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967261
[18363.308825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967260
[18363.308828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967259
[18412.849619] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967262
[18412.850582] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967263
[18412.850911] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967264
[18412.851264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967265
[18443.565725] __add_stripe_bio: md127: start ff2721beec8c2fa0(28454161000+8) 4294967260
[18473.028395] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967260
[18473.029608] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967261
[18502.557646] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967262
[18502.557723] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967263
[18502.558152] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967264
[18502.558930] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967265
[18502.559041] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967266
[18502.563022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967265
[18502.563024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967264
[18502.563025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967263
[18502.563026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967262
[18502.563027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967261
[18502.563029] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967260
[18502.563030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967259
[18560.133303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331007720+8) 4294967259
[18589.564077] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967260
[18589.564089] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967261
[18589.564670] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967262
[18589.565137] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967263
[18589.565700] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967264
[18589.566003] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967265
[18638.817896] __add_stripe_bio: md127: start ff2721beec8c2fa0(331165224+8) 4294967260
[18639.851587] __add_stripe_bio: md127: start ff2721beec8c2fa0(4563405800+8) 4294967260
[18721.230354] __add_stripe_bio: md127: start ff2721beec8c2fa0(6174129128+8) 4294967260
[18753.264400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6174129128+8) 4294967259
[18814.918267] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755240+8) 4294967260
[18817.035728] __add_stripe_bio: md127: start ff2721beec8c2fa0(331192424+8) 4294967267
[18817.037803] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967266
[18817.037809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967265
[18817.037812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967264
[18817.037815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967263
[18817.037818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967262
[18817.037822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967261
[18817.037825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967260
[18817.037827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967259
[18847.022837] __add_stripe_bio: md127: start ff2721beec8c2fa0(8589935656+8) 4294967260
[18931.949431] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530321832+8) 4294967260
[19054.852844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967265
[19054.852846] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967264
[19054.852847] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967263
[19054.852849] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967262
[19054.852850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967261
[19054.852852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967260
[19054.852853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967259
[19104.480492] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967264
[19104.480523] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967265
[19162.254665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967263
[19162.254666] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967262
[19162.254668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967261
[19162.254669] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967260
[19162.254671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967259
[19194.644616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967260
[19194.644618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967259
[19225.730035] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967260
[19225.730135] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967261
[19225.730341] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967262
[19225.733024] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967263
[19225.733509] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967264
[19225.799551] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967265
[19250.693927] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967266
[19251.803761] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967260
[19251.805818] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967261
[19251.807214] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967262
[19251.807230] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967263
[19251.807441] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967264
[19251.807684] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967265
[19284.419215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967265
[19284.419218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967264
[19284.419221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967263
[19284.419222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967262
[19284.419225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967261
[19284.419228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967260
[19284.419230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967259
[19324.123801] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967260
[19324.124880] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967261
[19389.626363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967264
[19389.626366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967263
[19389.626370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967262
[19389.626373] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967261
[19389.626376] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967260
[19389.626379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967259
[19411.000068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967264
[19411.000070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967263
[19411.000071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967262
[19411.000073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967261
[19411.000075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967260
[19411.000076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967259
[19442.885291] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967260
[19442.885494] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967261
[19442.885496] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967262
[19442.885575] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967263
[19500.040964] __add_stripe_bio: md127: start ff2721beec8c2fa0(536935976+8) 4294967260
[19503.516938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967263
[19503.516939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967262
[19503.516941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967261
[19503.516942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967260
[19503.516944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967259
[19531.506729] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967261
[19531.507247] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967262
[19531.510481] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967263
[19559.370264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967262
[19559.370268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967261
[19559.370272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967260
[19559.370275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967259
[19590.464792] __add_stripe_bio: md127: start ff2721beec8c2fa0(7788215976+8) 4294967260
[19620.633883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(7788215976+8) 4294967259
[19650.250748] __add_stripe_bio: md127: start ff2721beec8c2fa0(536913192+8) 4294967260
[19680.643891] __add_stripe_bio: md127: start ff2721beec8c2fa0(20135746152+8) 4294967260
[19708.804030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(20135746152+8) 4294967259
[19737.574540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967260
[19737.574543] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967259
[19765.378569] __add_stripe_bio: md127: start ff2721beec8c2fa0(536900904+8) 4294967261
[19794.831033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967260
[19821.381894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967259
[19821.429688] __add_stripe_bio: md127: start ff2721beec8c2fa0(536898024+8) 4294967264
[19856.960152] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967260
[19856.964598] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967261
[19856.967055] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967262
[19879.048926] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967263
[19879.048937] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967264
[19887.395626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967263
[19887.395631] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967262
[19887.395634] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967261
[19887.395637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967260
[19887.395639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967259
[19887.406610] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967260
[19916.087911] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967261
[19918.951492] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967260
[19947.259645] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967261
[19983.717648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967260
[19983.717650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967259
[19983.723154] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
[19983.723284] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967261
[19983.723330] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967262
[19983.723447] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967263
[20015.225720] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967264
[20015.225737] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967265
[20015.233248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967264
[20015.233249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967263
[20015.233250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967262
[20015.233251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967261
[20015.233252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967260
[20015.233253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967259
[20039.634420] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
[20059.881519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967263
[20059.881521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967262
[20059.881522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967261
[20059.881523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967260
[20059.881524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967259
[20091.703960] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967260
[20091.704062] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967261
[20091.704130] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967262
[20091.704371] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967263
[20091.704597] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967264
[20091.705014] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967265
[20091.705043] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967266
[20091.705080] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967267
[20107.172534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967266
[20107.416045] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967265
[20107.659569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967264
[20107.903083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967263
[20108.146684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967262
[20108.390242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967261
[20108.633740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967260
[20108.877229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967259
[20125.925086] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967260
[20125.925103] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967261
[20128.394916] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967262
[20128.583655] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967263
[20132.751983] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967264
[20138.332744] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967260
[20138.333973] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967261
[20138.334178] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967262
[20138.335009] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967263
[20138.335115] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967264
[20138.335265] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967265
[20138.338858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967264
[20138.338860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967263
[20138.338862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967262
[20138.338864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967261
[20138.338866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967260
[20138.338868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967259
[20166.832981] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967260
[20166.833229] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967261
[20166.834027] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967262
[20196.134888] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967263
[20199.500306] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967263
[20199.500310] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967262
[20199.500313] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967261
[20199.500317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967260
[20199.500321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
[20199.942600] __add_stripe_bio: md127: start ff2721beec8c2fa0(555373992+8) 4294967260
[20199.944367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
[20245.088563] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967260
[20245.088642] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967261
[20245.088687] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967262
[20245.088777] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967263
[20245.091384] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967264
[20245.091670] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967265
[20245.091900] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967266
[20245.092055] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967267
[20271.055283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967266
[20271.064573] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967265
[20271.073864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967264
[20271.083165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967263
[20275.933633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967262
[20275.942927] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967261
[20275.952228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967260
[20275.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967259
[20280.815537] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967260
[20280.816905] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967261
[20442.104270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967260
[20443.448533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967259
[20445.747762] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967260
[20445.747900] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967261
[20445.747918] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967262
[20445.748615] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967263
[20494.667635] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967264
[20494.667769] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967265
[20524.978466] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967264
[20524.987753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967263
[20524.997049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967262
[20525.006349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967261
[20533.505202] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967260
[20533.514488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967259
[20535.464796] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967260
[20535.465312] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967261
[20547.361843] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967262
[20547.362543] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967263
[20547.362994] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967264
[20565.098049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967263
[20565.098051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967262
[20565.098052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967261
[20565.098054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967260
[20565.098055] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967259
[20565.099574] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
[20565.099733] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
[20565.099960] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
[20609.002609] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
[20609.011900] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
[20609.021187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259
[20612.483895] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
[20612.484023] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
[20612.484674] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
[20641.495298] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
[20641.504590] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
[20641.513885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-01  7:56                                                                       ` Christian Theune
@ 2024-11-01  8:33                                                                         ` Christian Theune
  2024-11-03 15:54                                                                           ` Christian Theune
                                                                                             ` (2 more replies)
  0 siblings, 3 replies; 88+ messages in thread
From: Christian Theune @ 2024-11-01  8:33 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

A thought about the high numbers: they look like relatively regular “many bits on” patterns:

>>> bin(4294967264)
'0b11111111111111111111111111100000'
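
These values are exactly what small negative numbers look like when a 32-bit counter is printed as unsigned. A quick sanity check (my own interpretation, not confirmed against the md code):

```python
# Reinterpret the "high numbers" from the trace as signed 32-bit values.
def as_signed32(n):
    """Treat an unsigned 32-bit integer as two's-complement."""
    return n - (1 << 32) if n >= (1 << 31) else n

print(as_signed32(4294967264))  # -32
print(as_signed32(4294967259))  # -37
```

So the counter values in the trace above would correspond to small negative numbers, which would be consistent with uninitialised or underflowed state.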

I think I enabled the bitmap online, so MAYBE that circumvents the initialisation of some memory that would otherwise happen during a regular boot. I'm not seeing those numbers any more after booting cleanly.
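
(For reference, and assuming the bitmap was added the usual way on the live array, i.e. roughly:)

```shell
# Add an internal write-intent bitmap to a running array (as root).
mdadm --grow --bitmap=internal /dev/md127
```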

I’m letting things run for a few hours again, but I’m still concerned that the printk overhead may prevent the system from actually triggering the issue again.

If things don’t get stuck again, I’ll double-check that my reproducer is still valid on 6.11.5 without the debugging patch.
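
As an aside, a throwaway script along these lines (names and parsing mine, based on the trace format in the logs above) can sift a captured log for stripes with more start than end events:

```python
import re
from collections import Counter

# Parse the tracing patch's printk lines, e.g.
#   "__add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260"
#   "handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967259"
_PAT = re.compile(r'\b(start|end) [0-9a-f]+\((\d+)\+\d+\)')

def outstanding_stripes(lines):
    """Return {sector: balance} for sectors with more starts than ends."""
    balance = Counter()
    for line in lines:
        m = _PAT.search(line)
        if m:
            balance[int(m.group(2))] += 1 if m.group(1) == 'start' else -1
    return {sector: n for sector, n in balance.items() if n > 0}
```

A stripe that shows up with a positive balance at the point of the hang would be a candidate for a bio that was added but never completed.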

Christian

> On 1. Nov 2024, at 08:56, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
> ok, so the journal didn’t have that because it was way too much. Looks like I actually need to stick with the serial console logging after all.
> 
> I dug out a different one that goes back longer but even that one seems like something was missing early on when I didn’t have the serial console attached.
> 
> I’m wondering whether this indicates an issue during initialization? I’m going to reboot the machine and make sure i get the early logs with those numbers.
> 
> [  405.347345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22301786792+8) 4294967259
> [  432.542465] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967260
> [  432.542469] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967261
> [  434.272964] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967262
> [  434.273175] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967263
> [  434.273189] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967264
> [  434.273285] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967265
> [  434.274063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967264
> [  434.274066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967263
> [  434.274070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967262
> [  434.274073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967261
> [  434.274078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967260
> [  434.274083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967259
> [  434.276609] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967260
> [  434.278939] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967261
> [  464.922354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967260
> [  464.931833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967259
> [  466.964557] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967260
> [  466.964616] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967261
> [  474.399930] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967262
> [  474.451451] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967263
> [  489.447079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967262
> [  489.456574] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967261
> [  489.466069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967260
> [  489.475565] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967259
> [  491.235517] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967260
> [  491.235602] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967261
> [  498.153108] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967262
> [  498.156307] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967263
> [  530.332619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967262
> [  530.342110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967261
> [  530.351595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967260
> [  530.361082] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967259
> [  535.176774] __add_stripe_bio: md127: start ff2721beec8c2fa0(24985208424+8) 4294967260
> [  549.125326] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24985208424+8) 4294967259
> [  549.635782] __add_stripe_bio: md127: start ff2721beec8c2fa0(25521770024+8) 4294967261
> [  590.875593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967260
> [  590.885081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967259
> [  596.973863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967263
> [  596.973866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967262
> [  596.973869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967261
> [  596.973871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967260
> [  596.973881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967259
> [  596.974557] __add_stripe_bio: md127: start ff2721beec8c2fa0(26325099752+8) 4294967260
> [  637.646142] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26325099752+8) 4294967259
> [  641.292887] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032741096+8) 4294967260
> [  654.931195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032741096+8) 4294967259
> [  654.933295] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967260
> [  654.933570] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967261
> [  654.935967] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967262
> [  654.937411] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967263
> [  683.008873] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967264
> [  685.689494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967263
> [  685.689496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967262
> [  685.689498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967261
> [  685.689499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967260
> [  685.689501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967259
> [  685.690260] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967260
> [  685.692999] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967261
> [  685.693119] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967262
> [  685.693124] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967263
> [  685.693427] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967264
> [  685.693428] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967265
> [  685.693517] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967266
> [  685.693528] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967267
> [  713.684556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967266
> [  713.694044] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967265
> [  713.703539] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967264
> [  713.713026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967263
> [  713.722512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967262
> [  713.731996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967261
> [  713.741480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967260
> [  713.750962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967259
> [  715.765954] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967260
> [  715.766034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967261
> [  715.766278] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967262
> [  715.766305] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967263
> [  715.766468] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967264
> [  716.077253] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967265
> [  731.258391] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
> [  731.258401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967261
> [  731.258584] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967262
> [  731.258711] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967263
> [  731.260991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967264
> [  731.261318] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967265
> [  731.261513] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967266
> [  758.285428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967265
> [  758.294912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967264
> [  758.304396] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967263
> [  758.313881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967262
> [  758.323377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967261
> [  758.332875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967260
> [  758.342365] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
> [  758.922198] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
> [  780.668347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
> [  780.957247] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967260
> [  780.957393] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967261
> [  780.957440] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967262
> [  780.957616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967263
> [  780.957675] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967264
> [  780.957754] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967265
> [  790.623177] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967266
> [  828.374094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967265
> [  828.383581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967264
> [  828.393067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967263
> [  828.402553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967262
> [  828.412040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967261
> [  828.421525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967260
> [  828.431012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967259
> [  830.477927] __add_stripe_bio: md127: start ff2721beec8c2fa0(13690207080+8) 4294967260
> [  851.040449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(13690207080+8) 4294967259
> [  851.762678] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967260
> [  851.762837] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967261
> [  851.762948] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967262
> [  851.763032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967263
> [  851.763068] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967264
> [  851.763112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967265
> [  851.763202] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967266
> [  851.766405] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967267
> [  851.768763] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967266
> [  851.768766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967265
> [  851.768768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967264
> [  851.768770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967263
> [  851.768773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967262
> [  851.768775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967261
> [  851.768778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967260
> [  851.768780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967259
> [  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967261
> [  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967260
> [  880.058982] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967262
> [  880.059032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967263
> [  880.059090] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967264
> [  880.059140] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967265
> [  880.059317] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967266
> [  891.735497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967265
> [  891.744974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967264
> [  891.754455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967263
> [  891.763939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967262
> [  891.773422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967261
> [  891.782975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967260
> [  891.792469] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967259
> [  897.108788] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967260
> [  897.108789] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967261
> [  897.108813] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967262
> [  897.108823] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967263
> [  903.693112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967264
> [  904.663454] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967265
> [  906.906830] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967260
> [  906.908087] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967261
> [  906.908508] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967262
> [  906.910088] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967263
> [  906.912093] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967264
> [  906.912840] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967265
> [  906.914294] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967266
> [  906.914323] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967267
> [  906.914806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967266
> [  906.914808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967265
> [  906.914809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967264
> [  906.914810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967263
> [  906.914811] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967262
> [  906.914813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967261
> [  906.914815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967260
> [  906.914817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967259
> [  934.849642] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861773736+8) 4294967261
> [  934.854037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967260
> [  934.854040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967259
> [  934.855808] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967260
> [  934.855945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967261
> [  963.315203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967262
> [  963.315320] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967263
> [  963.315327] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967264
> [  963.315499] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967265
> [  982.866693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967264
> [  982.876178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967263
> [  982.885665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967262
> [  982.895158] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967261
> [  982.904644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967260
> [  982.914129] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967259
> [  990.121616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967260
> [  990.121662] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967261
> [  990.121768] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967262
> [  990.121828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967263
> [  990.121843] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967264
> [ 1013.206756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967263
> [ 1013.206757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967262
> [ 1013.206758] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967261
> [ 1013.206759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967260
> [ 1013.224363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967259
> [ 1032.134913] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967260
> [ 1032.134928] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967261
> [ 1032.135028] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967262
> [ 1032.135078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967263
> [ 1041.027196] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967264
> [ 1041.027321] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967265
> [ 1041.027485] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967266
> [ 1057.623365] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967267
> [ 1076.893035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967266
> [ 1076.902520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967265
> [ 1076.912004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967264
> [ 1076.921490] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967263
> [ 1076.930986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967262
> [ 1076.940475] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967261
> [ 1076.949962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967260
> [ 1076.959446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967259
> [ 1077.721459] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967260
> [ 1077.721615] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967261
> [ 1077.721706] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967262
> [ 1077.721739] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967263
> [ 1077.721765] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967264
> [ 1110.833257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967263
> [ 1110.842743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967262
> [ 1110.852225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967261
> [ 1110.861709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967260
> [ 1110.871194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967259
> [ 1112.052569] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967260
> [ 1112.052666] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967261
> [ 1112.052695] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967262
> [ 1112.052727] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967263
> [ 1112.052778] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967264
> [ 1112.053637] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967265
> [ 1112.053649] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967266
> [ 1173.829738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967265
> [ 1173.839223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967264
> [ 1173.848709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967263
> [ 1173.858195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967262
> [ 1173.867683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967261
> [ 1173.877167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967260
> [ 1173.886654] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967259
> [ 1176.428651] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967260
> [ 1176.428940] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967263
> [ 1176.428942] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967264
> [ 1176.428939] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967262
> [ 1176.428903] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967261
> [ 1176.429040] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967265
> [ 1191.700497] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967265
> [ 1191.704134] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967266
> [ 1191.704199] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967267
> [ 1191.705804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967266
> [ 1191.705808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967265
> [ 1191.705809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967264
> [ 1191.705812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967263
> [ 1191.705815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967262
> [ 1191.705817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967261
> [ 1191.705819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967260
> [ 1191.705821] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967259
> [ 1191.810863] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792488+8) 4294967260
> [ 1244.235788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27917293544+8) 4294967260
> [ 1309.535319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967264
> [ 1309.544810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967263
> [ 1309.554303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967262
> [ 1309.563787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967261
> [ 1309.573272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967260
> [ 1309.582759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967259
> [ 1314.950362] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967262
> [ 1314.950455] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967263
> [ 1314.950457] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967264
> [ 1314.950470] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967265
> [ 1345.736319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967264
> [ 1345.745804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967263
> [ 1345.755290] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967262
> [ 1345.764773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967261
> [ 1345.774264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967260
> [ 1345.783759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967259
> [ 1346.823135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28462541160+8) 4294967259
> [ 1346.824776] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967260
> [ 1346.824799] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967261
> [ 1346.824806] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967262
> [ 1346.824922] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967263
> [ 1346.825566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967264
> [ 1373.560546] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814568+8) 4294967260
> [ 1431.650090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861814568+8) 4294967259
> [ 1468.944088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967265
> [ 1468.953581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967264
> [ 1468.963067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967263
> [ 1468.972552] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967262
> [ 1468.982036] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967261
> [ 1468.991524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967260
> [ 1469.001009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967259
> [ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967261
> [ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967260
> [ 1474.904634] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967262
> [ 1474.904752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967263
> [ 1474.904798] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967264
> [ 1474.904805] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967265
> [ 1477.837716] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967266
> [ 1479.836591] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967260
> [ 1479.858896] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967261
> [ 1479.859238] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967262
> [ 1479.859525] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967263
> [ 1479.859669] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967264
> [ 1479.859897] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967265
> [ 1479.860071] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967266
> [ 1507.386887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967265
> [ 1507.396375] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967264
> [ 1507.405858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967263
> [ 1507.415343] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967262
> [ 1507.424831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967261
> [ 1507.434322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967260
> [ 1507.443826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967259
> [ 1569.056325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967264
> [ 1569.065837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967263
> [ 1569.075325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967262
> [ 1569.084816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967261
> [ 1569.094308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967260
> [ 1569.103801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967259
> [ 1571.985752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967260
> [ 1571.985858] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967261
> [ 1571.985888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967263
> [ 1571.985864] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967262
> [ 1571.985962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967264
> [ 1571.986450] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967265
> [ 1582.882338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967264
> [ 1582.882340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967263
> [ 1582.882342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967262
> [ 1582.882344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967261
> [ 1582.882345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967260
> [ 1582.882346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967259
> [ 1582.884560] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967260
> [ 1582.884860] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967261
> [ 1582.884880] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967262
> [ 1582.885034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967263
> [ 1582.885126] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967264
> [ 1582.885164] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967265
> [ 1675.519030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967264
> [ 1675.528518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967263
> [ 1675.538016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967262
> [ 1675.547513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967261
> [ 1675.556999] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967260
> [ 1675.566485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967259
> [ 1682.250399] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967260
> [ 1682.250639] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967261
> [ 1682.250690] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967262
> [ 1682.250718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967263
> [ 1682.250974] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967264
> [ 1682.251078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967265
> [ 1682.251306] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967266
> [ 1704.298207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967265
> [ 1704.298211] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967264
> [ 1704.298214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967263
> [ 1704.298216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967262
> [ 1704.298218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967261
> [ 1704.298220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967260
> [ 1704.298222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967259
> [ 1704.299566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967260
> [ 1704.299718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967261
> [ 1704.299758] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967262
> [ 1704.299834] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967263
> [ 1704.299888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967264
> [ 1704.304001] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967265
> [ 1704.304132] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967266
> [ 1772.283712] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967264
> [ 1772.293200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967263
> [ 1772.302685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967262
> [ 1772.312169] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967261
> [ 1772.321652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967260
> [ 1772.331135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967259
> [ 1776.549687] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967260
> [ 1776.549697] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967261
> [ 1776.549898] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967263
> [ 1776.549945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967264
> [ 1776.549962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967265
> [ 1776.549828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967262
> [ 1776.550033] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967266
> [ 1776.550080] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967267
> [ 1781.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967266
> [ 1782.080461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967265
> [ 1782.199404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967264
> [ 1782.318346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967263
> [ 1782.438150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967262
> [ 1782.557963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967261
> [ 1782.677762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967260
> [ 1782.797570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967259
> [ 1786.992892] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967261
> [ 1786.992878] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967260
> [ 1786.993259] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967262
> [ 1786.993401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967263
> [ 1786.993449] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967264
> [ 1795.858021] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967266
> [ 1795.858009] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967265
> [ 1795.858180] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967267
> [ 1805.164880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967266
> [ 1805.174370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967265
> [ 1805.183853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967264
> [ 1805.193339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967263
> [ 1805.202828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967262
> [ 1805.212314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967261
> [ 1805.221800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967260
> [ 1805.231291] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967259
> [ 1807.730968] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967261
> [ 1807.730937] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967260
> [ 1807.731203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967262
> [ 1807.731267] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967263
> [ 1807.731406] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967264
> [ 1807.731542] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967265
> [ 1807.731764] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967266
> [ 1893.800189] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967263
> [ 1893.809691] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967262
> [ 1893.819186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967261
> [ 1893.828675] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967260
> [ 1893.838165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967259
> [ 1897.304170] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967260
> [ 1897.304333] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967261
> [ 1897.304579] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967262
> [ 1897.304721] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967263
> [ 1897.304812] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967264
> [ 1897.304978] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967265
> [ 1910.883901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861873064+8) 4294967259
> [ 1910.888991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967262
> [ 1910.888995] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967264
> [ 1910.888988] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967261
> [ 1910.888993] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967263
> [ 1910.888986] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967260
> [ 1990.952649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967264
> [ 1990.952651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967263
> [ 1990.952653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967262
> [ 1990.952655] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967261
> [ 1990.952657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967260
> [ 1990.952659] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967259
> [ 1990.957010] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967260
> [ 1990.957011] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967261
> [ 1990.957016] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967262
> [ 1990.957020] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967263
> [ 2021.437780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967262
> [ 2021.437782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967261
> [ 2021.437783] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967260
> [ 2021.437785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967259
> [ 2021.442407] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967260
> [ 2021.443820] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967261
> [ 2045.539668] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967262
> [ 2045.540142] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967263
> [ 2045.540232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967264
> [ 2045.540262] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967265
> [ 2050.125201] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967266
> [ 2057.875279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967265
> [ 2057.884767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967264
> [ 2057.894262] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967263
> [ 2057.903753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967262
> [ 2057.913237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967261
> [ 2057.922722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967260
> [ 2057.932205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967259
> [ 2059.233074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967260
> [ 2059.233116] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967261
> [ 2059.233120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967262
> [ 2059.233171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967263
> [ 2059.233632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967264
> [ 2059.233684] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967265
> [ 2059.235328] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967266
> [ 2059.235336] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967267
> [ 2059.238433] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967266
> [ 2059.238435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967265
> [ 2059.238436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967264
> [ 2059.238437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967263
> [ 2059.238439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967262
> [ 2059.238440] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967261
> [ 2059.238441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967260
> [ 2059.238443] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967259
> [ 2090.648331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967260
> [ 2090.648399] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967261
> [ 2090.648402] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967262
> [ 2090.648414] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967263
> [ 2090.648428] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967264
> [ 2090.648540] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967265
> [ 2090.651017] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967266
> [ 2090.700177] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967267
> [ 2118.173167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967266
> [ 2118.182657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967265
> [ 2118.192147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967264
> [ 2118.201638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967263
> [ 2118.211138] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967262
> [ 2118.220626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967261
> [ 2118.230111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967260
> [ 2118.239602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967259
> [ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967260
> [ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967261
> [ 2119.232691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967262
> [ 2119.232707] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967263
> [ 2119.232880] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967264
> [ 2119.232926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967265
> [ 2119.232990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967266
> [ 2119.233054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967267
> [ 2146.417917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967266
> [ 2151.981747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967265
> [ 2152.121412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967264
> [ 2152.261047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967263
> [ 2152.400699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967262
> [ 2152.540356] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967261
> [ 2152.680005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967260
> [ 2152.819653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967259
> [ 2157.037069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967260
> [ 2157.037363] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967261
> [ 2157.037405] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967262
> [ 2157.037421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967263
> [ 2157.037446] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967264
> [ 2214.237201] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967263
> [ 2214.246685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967262
> [ 2214.256174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967261
> [ 2214.265665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967260
> [ 2214.275152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967259
> [ 2220.022835] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967260
> [ 2220.022859] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967261
> [ 2220.022876] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967262
> [ 2220.022912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967263
> [ 2220.023258] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967265
> [ 2220.023161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967264
> [ 2243.495792] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130495656+8) 4294967264
> [ 2272.045830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967265
> [ 2272.045833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967264
> [ 2272.045835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967263
> [ 2272.045837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967262
> [ 2272.045838] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967261
> [ 2272.045840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967260
> [ 2272.045841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967259
> [ 2302.557785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967266
> [ 2302.557787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967265
> [ 2302.557789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967264
> [ 2302.557791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967263
> [ 2302.557793] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967262
> [ 2302.557796] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967261
> [ 2302.557797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967260
> [ 2302.557799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967259
> [ 2302.561904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967260
> [ 2302.561926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967261
> [ 2302.561957] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967263
> [ 2302.561933] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967262
> [ 2302.562006] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967264
> [ 2302.562203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967265
> [ 2302.562232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967266
> [ 2302.562597] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967267
> [ 2329.647721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967266
> [ 2329.738196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967265
> [ 2329.828677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967264
> [ 2329.919153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967263
> [ 2330.009644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967262
> [ 2330.100125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967261
> [ 2330.190603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967260
> [ 2330.281085] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967259
> [ 2332.172188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967265
> [ 2332.172198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967264
> [ 2332.172233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967263
> [ 2332.172242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967262
> [ 2332.172255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967261
> [ 2332.172264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967260
> [ 2332.172278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967259
> [ 2332.178323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967262
> [ 2332.178317] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967261
> [ 2332.178310] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967260
> [ 2332.178326] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967263
> [ 2332.178394] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967264
> [ 2332.178580] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967265
> [ 2332.178600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967266
> [ 2332.178697] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967267
> [ 2358.527771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967266
> [ 2358.657096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967265
> [ 2358.786383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967264
> [ 2358.915693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967263
> [ 2359.044994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967262
> [ 2359.174296] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967261
> [ 2359.303592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967260
> [ 2359.432875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967259
> [ 2367.401519] __add_stripe_bio: md127: start ff2721beec8c2fa0(27111972904+8) 4294967260
> [ 2367.410065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27111972904+8) 4294967259
> [ 2399.790368] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967260
> [ 2403.855440] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967261
> [ 2403.855574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967262
> [ 2403.855636] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967263
> [ 2403.855687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967264
> [ 2478.513548] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967263
> [ 2478.523034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967262
> [ 2478.532518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967261
> [ 2478.542003] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967260
> [ 2478.551487] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967259
> [ 2483.294420] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967264
> [ 2483.294422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967263
> [ 2483.294423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967262
> [ 2483.294425] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967261
> [ 2483.294426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967260
> [ 2483.294428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967259
> [ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967260
> [ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967261
> [ 2515.139584] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967262
> [ 2515.139592] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967264
> [ 2515.139587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967263
> [ 2515.139593] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967265
> [ 2515.139795] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967266
> [ 2556.408113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967264
> [ 2556.417600] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967263
> [ 2556.427088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967262
> [ 2556.436578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967261
> [ 2556.446066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967260
> [ 2556.455551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967259
> [ 2559.815547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967260
> [ 2559.815563] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967261
> [ 2559.815793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967262
> [ 2559.815874] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967263
> [ 2559.816031] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967264
> [ 2568.256111] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967265
> [ 2568.256157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967266
> [ 2619.422458] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967265
> [ 2619.431942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967264
> [ 2619.441449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967263
> [ 2619.450948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967262
> [ 2619.460435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967261
> [ 2619.469921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967260
> [ 2619.479412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967259
> [ 2633.472211] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967261
> [ 2633.472205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967260
> [ 2633.472305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967262
> [ 2633.472427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967264
> [ 2633.472417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967263
> [ 2633.472587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967265
> [ 2633.539700] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967266
> [ 2661.223491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967265
> [ 2661.223493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967264
> [ 2661.223494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967263
> [ 2661.223496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967262
> [ 2661.223497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967261
> [ 2661.223498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967260
> [ 2661.223500] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967259
> [ 2661.228040] __add_stripe_bio: md127: start ff2721beec8c2fa0(539699176+8) 4294967260
> [ 2706.576782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(539699176+8) 4294967259
> [ 2709.937157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967260
> [ 2709.937171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967261
> [ 2709.937316] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967262
> [ 2709.937650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967263
> [ 2709.937717] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967264
> [ 2709.937724] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967265
> [ 2709.937725] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967266
> [ 2709.937737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967267
> [ 2721.462599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967266
> [ 2721.610877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967265
> [ 2721.759136] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967264
> [ 2721.907488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967263
> [ 2722.055771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967262
> [ 2722.204050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967261
> [ 2722.352331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967260
> [ 2722.500592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967259
> [ 2724.772625] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967260
> [ 2724.772751] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967261
> [ 2724.772938] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967262
> [ 2724.772988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967263
> [ 2724.773119] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967264
> [ 2754.673788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967263
> [ 2754.673790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967262
> [ 2754.673791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967261
> [ 2754.673794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967260
> [ 2754.673795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967259
> [ 2785.394053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967261
> [ 2785.394056] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967263
> [ 2785.394059] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967265
> [ 2785.394054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967262
> [ 2785.394058] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967264
> [ 2785.394050] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967260
> [ 2785.463401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967266
> [ 2785.472247] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967267
> [ 2787.076731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967266
> [ 2787.076732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967265
> [ 2787.076734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967264
> [ 2787.076735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967263
> [ 2787.076736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967262
> [ 2787.076738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967261
> [ 2787.076739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967260
> [ 2787.076740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967259
> [ 2808.905214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967265
> [ 2808.905333] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967266
> [ 2808.905388] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967267
> [ 2808.906939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967266
> [ 2808.906941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967265
> [ 2808.906943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967264
> [ 2808.906944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967263
> [ 2808.906946] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967262
> [ 2808.906947] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967261
> [ 2808.906950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967260
> [ 2808.906952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967259
> [ 2836.311276] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130552808+8) 4294967260
> [ 2854.798417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
> [ 2856.543067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967264
> [ 2856.543070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967263
> [ 2856.543073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967262
> [ 2856.543075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967261
> [ 2856.543077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967260
> [ 2856.543079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
> [ 2856.546312] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967260
> [ 2856.546314] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967261
> [ 2856.546421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967262
> [ 2856.546509] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967263
> [ 2856.546926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967264
> [ 2886.489550] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967265
> [ 2886.489595] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967266
> [ 2886.489713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967267
> [ 2897.617989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967266
> [ 2897.627477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967265
> [ 2897.636962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967264
> [ 2897.646444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967263
> [ 2897.655920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967262
> [ 2897.665409] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967261
> [ 2897.674910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967260
> [ 2897.684404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967259
> [ 2899.844282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967266
> [ 2899.844316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967265
> [ 2899.844354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967264
> [ 2899.844382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967263
> [ 2899.844423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967262
> [ 2899.844462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967261
> [ 2899.844516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967260
> [ 2899.844570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967259
> [ 2899.845690] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967260
> [ 2899.845966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967261
> [ 2899.846019] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967262
> [ 2899.846062] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967263
> [ 2899.846186] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967264
> [ 2899.846207] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967265
> [ 2899.846260] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967266
> [ 2952.891498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967265
> [ 2952.900984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967264
> [ 2952.910478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967263
> [ 2952.919966] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967262
> [ 2952.929461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967261
> [ 2952.938950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967260
> [ 2952.948431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967259
> [ 2955.316494] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967260
> [ 2955.316704] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967261
> [ 2955.316809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967262
> [ 2955.316988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967263
> [ 2955.317105] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967264
> [ 2987.714377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967263
> [ 2987.714379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967262
> [ 2987.714381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967261
> [ 2987.714383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967260
> [ 2987.714385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967259
> [ 2987.719137] __add_stripe_bio: md127: start ff2721beec8c2fa0(1092459240+8) 4294967260
> [ 3047.110275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967264
> [ 3047.110276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967263
> [ 3047.110277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967262
> [ 3047.110278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967261
> [ 3047.110279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967260
> [ 3047.110281] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967259
> [ 3047.112501] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967260
> [ 3047.112711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967261
> [ 3047.112750] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967262
> [ 3070.186991] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967263
> [ 3070.187120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967264
> [ 3110.127145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967263
> [ 3110.136633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967262
> [ 3110.146121] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967261
> [ 3110.155611] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967260
> [ 3110.165103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967259
> [ 3113.362512] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967260
> [ 3113.362527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967261
> [ 3113.362737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967262
> [ 3113.362788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967264
> [ 3113.362772] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967263
> [ 3113.363524] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967265
> [ 3181.040787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967264
> [ 3181.050278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967263
> [ 3181.059767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967262
> [ 3181.069267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967261
> [ 3181.078760] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967260
> [ 3181.088248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967259
> [ 3190.006331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967260
> [ 3190.006353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967261
> [ 3190.006523] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967262
> [ 3190.006526] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967263
> [ 3190.006576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967264
> [ 3190.006604] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967265
> [ 3190.006676] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967266
> [ 3222.489157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130590248+8) 4294967259
> [ 3222.494665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967260
> [ 3222.494810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967261
> [ 3222.495400] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967262
> [ 3222.495460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967263
> [ 3222.496203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967264
> [ 3222.496266] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967265
> [ 3249.542100] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967264
> [ 3249.542102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967263
> [ 3249.542103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967262
> [ 3249.542105] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967261
> [ 3249.542107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967260
> [ 3249.542109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967259
> [ 3249.547575] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130590440+8) 4294967260
> [ 3298.070385] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967260
> [ 3298.070466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967261
> [ 3298.070767] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967262
> [ 3298.070824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967263
> [ 3298.070896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967264
> [ 3351.191989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967263
> [ 3351.201478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967262
> [ 3351.210961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967261
> [ 3351.220447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967260
> [ 3351.229931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967259
> [ 3354.186090] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967260
> [ 3354.186174] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967261
> [ 3354.186453] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967262
> [ 3354.186600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967263
> [ 3354.186610] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967264
> [ 3354.186666] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967265
> [ 3354.186682] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967266
> [ 3395.962921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967265
> [ 3395.972408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967264
> [ 3395.981893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967263
> [ 3395.991379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967262
> [ 3396.000863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967261
> [ 3396.010344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967260
> [ 3396.019828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967259
> [ 3397.783940] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967260
> [ 3397.783984] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967261
> [ 3397.784015] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967262
> [ 3397.784039] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967263
> [ 3397.784102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967264
> [ 3397.784112] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967265
> [ 3397.784206] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967266
> [ 3397.784239] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967267
> [ 3407.297456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967266
> [ 3407.515478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967265
> [ 3407.733622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967264
> [ 3407.952516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967263
> [ 3408.171430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967262
> [ 3408.390355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967261
> [ 3408.609270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967260
> [ 3408.828226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967259
> [ 3410.118755] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967260
> [ 3410.118912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967261
> [ 3410.119044] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967262
> [ 3410.119183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967263
> [ 3410.119190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967264
> [ 3410.119398] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967265
> [ 3474.070919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967264
> [ 3474.080400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967263
> [ 3474.089879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967262
> [ 3474.099370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967261
> [ 3474.108856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967260
> [ 3474.118342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967259
> [ 3477.161290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967261
> [ 3477.161249] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967260
> [ 3477.161478] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967262
> [ 3477.161505] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967263
> [ 3487.704510] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967264
> [ 3487.704567] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967265
> [ 3510.992084] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967264
> [ 3510.992086] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967263
> [ 3510.992088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967262
> [ 3510.992089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967261
> [ 3510.992090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967260
> [ 3510.992091] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967259
> [ 3510.992993] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
> [ 3510.993007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
> [ 3550.083557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
> [ 3550.083561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
> [ 3555.089305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
> [ 3555.089386] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
> [ 3555.089618] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967262
> [ 3555.089635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967263
> [ 3555.089655] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967264
> [ 3572.781054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967263
> [ 3572.790547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967262
> [ 3572.800034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967261
> [ 3572.809523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
> [ 3572.819005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
> [ 3589.172647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967266
> [ 3589.172750] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967265
> [ 3589.172860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967264
> [ 3589.172991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967263
> [ 3589.173151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967262
> [ 3589.173314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967261
> [ 3589.173391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967260
> [ 3589.173461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967259
> [ 3589.175972] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743848+8) 4294967260
> [ 3621.769395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032743848+8) 4294967259
> [ 3623.304014] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399975272+8) 4294967260
> [ 3686.974958] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399986216+8) 4294967267
> [ 3722.135513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967266
> [ 3722.144998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967265
> [ 3722.154484] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967264
> [ 3722.163972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967263
> [ 3722.173455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967262
> [ 3722.182939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967261
> [ 3722.192426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967260
> [ 3722.201912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967259
> [ 3729.633922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967260
> [ 3729.634199] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967261
> [ 3729.634228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967262
> [ 3729.634351] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967263
> [ 3729.634466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967264
> [ 3737.013926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967265
> [ 3737.016635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967266
> [ 3761.542817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967265
> [ 3761.542819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967264
> [ 3761.542820] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967263
> [ 3761.542822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967262
> [ 3761.542824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967261
> [ 3761.542826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967260
> [ 3761.542827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967259
> [ 3761.545145] __add_stripe_bio: md127: start ff2721beec8c2fa0(4298178536+8) 4294967260
> [ 3781.220916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(4298178536+8) 4294967259
> [ 3816.363850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967261
> [ 3816.363852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967260
> [ 3816.363853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967259
> [ 3816.366775] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967260
> [ 3816.367295] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967261
> [ 3816.367301] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967262
> [ 3816.367544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967263
> [ 3816.367693] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967264
> [ 3816.368092] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967265
> [ 3843.302810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967266
> [ 3869.720089] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967261
> [ 3869.720097] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967262
> [ 3869.720194] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967263
> [ 3869.720213] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967264
> [ 3869.725214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967265
> [ 3911.191478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967264
> [ 3911.200970] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967263
> [ 3911.210456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967262
> [ 3911.219942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967261
> [ 3911.229426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967260
> [ 3911.238911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967259
> [ 3914.293028] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967264
> [ 3914.293031] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967263
> [ 3914.293033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967262
> [ 3914.293035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967261
> [ 3914.293038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967260
> [ 3914.293040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967259
> [ 3914.295622] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967261
> [ 3914.295641] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967262
> [ 3914.295643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967263
> [ 3914.295621] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967260
> [ 3914.295871] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967264
> [ 3999.621383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967263
> [ 3999.630885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967262
> [ 3999.640370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967261
> [ 3999.649865] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967260
> [ 3999.659351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967259
> [ 4004.391868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967260
> [ 4004.391913] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967261
> [ 4004.392117] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967262
> [ 4004.392564] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967263
> [ 4004.392579] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967264
> [ 4004.392650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967265
> [ 4004.392858] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967266
> [ 4074.054694] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967265
> [ 4074.064183] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967264
> [ 4074.073668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967263
> [ 4074.083155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967262
> [ 4074.092647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967261
> [ 4074.102149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967260
> [ 4074.111642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967259
> [ 4114.890444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967265
> [ 4114.890445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967264
> [ 4114.890446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967263
> [ 4114.890447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967262
> [ 4114.890448] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967261
> [ 4114.890449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967260
> [ 4114.890450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967259
> [ 4114.894014] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743784+8) 4294967260
> [ 4137.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967260
> [ 4137.422134] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967261
> [ 4137.422222] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967262
> [ 4137.422380] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967263
> [ 4137.422447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967264
> [ 4137.422527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967265
> [ 4137.422809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967266
> [ 4195.441417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967265
> [ 4195.450902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967264
> [ 4195.460386] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967263
> [ 4195.469873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967262
> [ 4195.479360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967261
> [ 4195.488848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967260
> [ 4195.498336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967259
> [ 4205.104432] __add_stripe_bio: md127: start ff2721beec8c2fa0(6458132264+8) 4294967260
> [ 4270.444837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6458132264+8) 4294967259
> [ 4282.274860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967265
> [ 4282.274879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967264
> [ 4282.274897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967263
> [ 4282.274916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967262
> [ 4282.274936] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967261
> [ 4282.274955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967260
> [ 4282.274975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967259
> [ 4282.276460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967260
> [ 4282.276797] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967261
> [ 4282.276964] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967262
> [ 4282.277061] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967263
> [ 4282.277143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967264
> [ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967266
> [ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967265
> [ 4282.277271] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967267
> [ 4282.321155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967266
> [ 4282.387435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967265
> [ 4282.448733] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967264
> [ 4282.448742] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967263
> [ 4282.448751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967262
> [ 4282.448759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967261
> [ 4282.448767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967260
> [ 4282.448775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967259
> [ 4315.061841] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967260
> [ 4315.061855] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967262
> [ 4315.061844] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967261
> [ 4315.061924] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967263
> [ 4315.061976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967264
> [ 4315.063503] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967265
> [ 4382.511212] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967264
> [ 4382.520702] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967263
> [ 4382.530180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967262
> [ 4382.539665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967261
> [ 4382.549163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967260
> [ 4382.558657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967259
> [ 4387.176732] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967260
> [ 4387.176821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967261
> [ 4387.176898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967262
> [ 4387.177030] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967263
> [ 4387.177229] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967264
> [ 4387.177270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967265
> [ 4457.957710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967264
> [ 4457.957715] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967263
> [ 4457.957719] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967262
> [ 4457.957723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967261
> [ 4457.957727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967260
> [ 4457.957731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967259
> [ 4457.961270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967260
> [ 4457.961406] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967261
> [ 4457.961619] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967262
> [ 4457.961651] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967264
> [ 4457.961806] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967265
> [ 4457.961645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967263
> [ 4485.589613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967264
> [ 4485.589615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967263
> [ 4485.589616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967262
> [ 4485.589617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967261
> [ 4485.589618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967260
> [ 4485.589619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967259
> [ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967260
> [ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967261
> [ 4485.593288] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967262
> [ 4485.593416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967263
> [ 4485.593532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967264
> [ 4485.593652] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967265
> [ 4485.593678] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967266
> [ 4485.850480] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967267
> [ 4515.537222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967266
> [ 4515.537223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967265
> [ 4515.537224] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967264
> [ 4515.537226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967263
> [ 4515.537227] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967262
> [ 4515.537228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967261
> [ 4515.537229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967260
> [ 4515.537231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967259
> [ 4515.539155] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967260
> [ 4515.539253] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967261
> [ 4515.539324] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967262
> [ 4515.539513] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967263
> [ 4515.539522] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967264
> [ 4543.939187] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967265
> [ 4567.298898] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967264
> [ 4567.308382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967263
> [ 4567.317870] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967262
> [ 4567.327355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967261
> [ 4567.336851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967260
> [ 4567.346342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967259
> [ 4574.769978] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967260
> [ 4574.770644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967261
> [ 4574.770713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967262
> [ 4585.659234] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967263
> [ 4585.659638] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967264
> [ 4585.659851] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967265
> [ 4628.062519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967264
> [ 4628.062521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967263
> [ 4628.062522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967262
> [ 4628.062524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967261
> [ 4628.062525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967260
> [ 4628.062526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967259
> [ 4628.067547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967260
> [ 4628.067553] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967262
> [ 4628.067549] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967261
> [ 4628.067556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967263
> [ 4628.067558] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967264
> [ 4628.067643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967265
> [ 4628.067650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967266
> [ 4655.735972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967266
> [ 4655.738016] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967267
> [ 4655.740269] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967266
> [ 4655.740270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967265
> [ 4655.740271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967264
> [ 4655.740273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967263
> [ 4655.740274] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967262
> [ 4655.740275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967261
> [ 4655.740277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967260
> [ 4655.740278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967259
> [ 4655.744826] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967260
> [ 4655.745042] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967261
> [ 4655.745074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967262
> [ 4655.745162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967263
> [ 4684.693786] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967264
> [ 4707.657198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967263
> [ 4707.666685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967262
> [ 4707.676172] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967261
> [ 4707.685664] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967260
> [ 4707.695155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967259
> [ 4714.104353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967260
> [ 4714.104370] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967261
> [ 4714.104532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967262
> [ 4714.104556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967263
> [ 4714.104738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967264
> [ 4714.104749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967265
> [ 4714.104870] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967266
> [ 4767.415146] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967265
> [ 4767.424642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967264
> [ 4767.434126] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967263
> [ 4767.443610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967262
> [ 4767.453092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967261
> [ 4767.462576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967260
> [ 4767.472063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967259
> [ 4772.506161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967260
> [ 4772.506215] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967261
> [ 4772.506341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967262
> [ 4772.506665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967263
> [ 4772.506877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967264
> [ 4772.507010] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967265
> [ 4788.406891] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967266
> [ 4841.522571] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967260
> [ 4841.523093] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967261
> [ 4907.596094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967260
> [ 4907.596096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967259
> [ 4907.596899] __add_stripe_bio: md127: start ff2721beec8c2fa0(27651747176+8) 4294967260
> [ 4973.644083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27651747176+8) 4294967259
> [ 4983.401423] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967260
> [ 4983.401434] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967261
> [ 4983.401439] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967262
> [ 4983.412449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
> [ 4983.412450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
> [ 4983.412452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
> [ 5009.844830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967263
> [ 5009.844831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967262
> [ 5009.844832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
> [ 5009.844834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
> [ 5009.844835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
> [ 5036.841498] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967260
> [ 5036.841516] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967261
> [ 5036.841589] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967262
> [ 5036.841711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967263
> [ 5036.841866] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967264
> [ 5036.842021] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967265
> [ 5065.748527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967264
> [ 5065.758011] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967263
> [ 5065.767510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967262
> [ 5065.776985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967261
> [ 5065.786473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967260
> [ 5065.795964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967259
> [ 5069.198341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967260
> [ 5069.198447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967261
> [ 5069.198475] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967262
> [ 5069.198565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967263
> [ 5069.198600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967264
> [ 5069.198657] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967265
> [ 5069.198719] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967266
> [ 5069.198749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967267
> [ 5158.739304] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967265
> [ 5158.748804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967264
> [ 5158.758295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967263
> [ 5158.767784] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967262
> [ 5158.777267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967261
> [ 5158.786749] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967260
> [ 5158.796231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967259
> [ 5174.398776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967260
> [ 5174.398877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967261
> [ 5174.398898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967262
> [ 5174.398990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967263
> [ 5174.399068] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967264
> [ 5174.399165] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967265
> [ 5215.123596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967264
> [ 5215.133088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967263
> [ 5215.142587] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967262
> [ 5215.152066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967261
> [ 5215.161549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967260
> [ 5215.171035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967259
> [ 5223.954743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967260
> [ 5223.954779] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967261
> [ 5223.954951] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967262
> [ 5223.955007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967263
> [ 5223.955223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967264
> [ 5223.955228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967265
> [ 5223.955230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967266
> [ 5223.955472] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967267
> [ 5259.738014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967266
> [ 5260.031050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967265
> [ 5260.324115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967264
> [ 5260.617186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967263
> [ 5260.910270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967262
> [ 5261.203354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967261
> [ 5261.496406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967260
> [ 5261.789480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967259
> [ 5265.172862] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967260
> [ 5265.173244] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967261
> [ 5265.173322] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967262
> [ 5265.173763] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967263
> [ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967265
> [ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967264
> [ 5265.173928] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967266
> [ 5265.173973] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967267
> [ 5294.960280] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967266
> [ 5295.249813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967265
> [ 5295.539340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967264
> [ 5295.829752] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967263
> [ 5296.120184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967262
> [ 5296.410623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967261
> [ 5296.701004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967260
> [ 5296.991418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967259
> [ 5298.908411] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967260
> [ 5298.908458] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967261
> [ 5298.908544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967262
> [ 5298.908650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967263
> [ 5298.908710] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967264
> [ 5298.909051] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967265
> [ 5413.856013] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667058856+8) 4294967266
> [ 5446.671677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967265
> [ 5446.671679] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967264
> [ 5446.671680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967263
> [ 5446.671681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967262
> [ 5446.671682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967261
> [ 5446.671683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967260
> [ 5446.671684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967259
> [ 5479.015532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967260
> [ 5479.015632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967261
> [ 5479.015683] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967262
> [ 5479.015735] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967263
> [ 5492.731793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967264
> [ 5492.731911] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967265
> [ 5506.007986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967264
> [ 5506.007988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967263
> [ 5506.007991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967262
> [ 5506.007994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967261
> [ 5506.007997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967260
> [ 5506.008000] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967259
> [ 5506.011738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967260
> [ 5506.011854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967261
> [ 5506.011861] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967262
> [ 5506.012133] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967263
> [ 5506.012143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967264
> [ 5555.890832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967263
> [ 5555.900322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967262
> [ 5555.909852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967261
> [ 5555.919336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967260
> [ 5555.928823] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967259
> [ 5574.002280] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967260
> [ 5574.002313] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967261
> [ 5574.002403] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967262
> [ 5574.002468] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967263
> [ 5574.002561] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967264
> [ 5574.002645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967265
> [ 5606.796975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967264
> [ 5606.796977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967263
> [ 5606.796978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967262
> [ 5606.796979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967261
> [ 5606.796981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967260
> [ 5606.796982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967259
> [ 5606.798208] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967260
> [ 5606.798527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967261
> [ 5606.798585] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967262
> [ 5606.798607] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967263
> [ 5606.803857] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967264
> [ 5606.804282] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967265
> [ 5639.962722] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967266
> [ 5652.645345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967265
> [ 5652.654833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967264
> [ 5652.664323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967263
> [ 5652.673815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967262
> [ 5652.683294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967261
> [ 5652.692781] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967260
> [ 5654.603101] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967259
> [ 5654.613230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967260
> [ 5654.613572] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967261
> [ 5654.613687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967262
> [ 5654.613814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967263
> [ 5654.614055] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967264
> [ 5683.045381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967263
> [ 5683.045383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967262
> [ 5683.045385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967261
> [ 5683.045387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967260
> [ 5683.045388] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967259
> [ 5683.048586] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967260
> [ 5683.048965] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967261
> [ 5683.049073] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967262
> [ 5683.049140] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967263
> [ 5683.049162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967264
> [ 5683.049196] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967265
> [ 5683.049256] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967266
> [ 5683.474474] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967267
> [ 5723.855633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967266
> [ 5724.027114] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967265
> [ 5724.198621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967264
> [ 5724.370117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967263
> [ 5724.541614] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967262
> [ 5724.713065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967261
> [ 5724.884518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967260
> [ 5725.055989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967259
> [ 5730.407790] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967260
> [ 5730.407821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967261
> [ 5730.407937] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967263
> [ 5730.407896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967262
> [ 5730.408159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967265
> [ 5730.408143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967264
> [ 5730.408353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967266
> [ 5758.122868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967260
> [ 5758.123578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967261
> [ 5758.123627] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967262
> [ 5758.130240] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967263
> [ 5758.130420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967264
> [ 5758.130534] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967265
> [ 5846.663594] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967264
> [ 5846.673081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967263
> [ 5846.682568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967262
> [ 5846.692061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967261
> [ 5846.701549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967260
> [ 5846.711038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967259
> [ 5911.098887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967265
> [ 5911.108378] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967264
> [ 5911.117873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967263
> [ 5911.127372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967262
> [ 5911.136857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967261
> [ 5911.146341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967260
> [ 5911.155824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967259
> [ 5928.422955] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967260
> [ 5928.422960] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967261
> [ 5928.422966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967262
> [ 5928.422974] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967264
> [ 5928.422972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967263
> [ 6014.681387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967265
> [ 6014.690871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967264
> [ 6014.700360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967263
> [ 6014.709859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967262
> [ 6014.719355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967261
> [ 6014.728844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967260
> [ 6014.738332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967259
> [ 6056.379708] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967265
> [ 6056.389200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967264
> [ 6056.398687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967263
> [ 6056.408178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967262
> [ 6056.417667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967261
> [ 6056.427155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967260
> [ 6056.436640] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967259
> [ 6063.789506] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967260
> [ 6063.789738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967261
> [ 6063.789989] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967262
> [ 6063.790034] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967263
> [ 6063.790190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967264
> [ 6063.790287] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967265
> [ 6078.230963] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967266
> [ 6105.353125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967265
> [ 6105.362612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967264
> [ 6105.372093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967263
> [ 6105.381577] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967262
> [ 6105.391064] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967261
> [ 6105.400555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967260
> [ 6105.410041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967259
> [ 6128.587354] __add_stripe_bio: md127: start ff2721beec8c2fa0(27920223080+8) 4294967260
> [ 6201.907683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27920223080+8) 4294967259
> [ 6210.113728] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032744680+8) 4294967259
> [ 6283.753223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967264
> [ 6283.762710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967263
> [ 6283.772194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967262
> [ 6283.781677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967261
> [ 6283.791163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967260
> [ 6283.800647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967259
> [ 6294.185205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967260
> [ 6294.185349] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967261
> [ 6354.564956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967266
> [ 6354.565008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967265
> [ 6354.565050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967264
> [ 6354.565103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967263
> [ 6354.565143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967262
> [ 6354.565198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967261
> [ 6354.565250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967260
> [ 6354.565295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967259
> [ 6354.571582] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967260
> [ 6354.571613] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967261
> [ 6354.571614] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967262
> [ 6354.572095] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967263
> [ 6381.572101] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967264
> [ 6381.572150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967265
> [ 6381.572462] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967266
> [ 6417.668789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967265
> [ 6417.678285] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967264
> [ 6417.687773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967263
> [ 6417.697266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967262
> [ 6417.706822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967261
> [ 6417.716318] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967260
> [ 6417.725807] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967259
> [ 6442.242691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967260
> [ 6442.242776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967261
> [ 6442.242901] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967262
> [ 6442.242998] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967263
> [ 6442.243060] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967264
> [ 6442.243109] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967265
> [ 6487.368252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967264
> [ 6487.368256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967263
> [ 6487.384984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967262
> [ 6487.401709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967261
> [ 6487.418441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967260
> [ 6487.418447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967259
> [ 6512.350543] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967260
> [ 6512.351290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967261
> [ 6512.351395] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967262
> [ 6512.351419] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967263
> [ 6512.351565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967264
> [ 6512.351578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967265
> [ 6512.351611] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967266
> [ 6558.339111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967265
> [ 6558.339113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967264
> [ 6558.339115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967263
> [ 6558.339118] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967262
> [ 6558.339120] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967261
> [ 6558.339122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967260
> [ 6558.339123] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967259
> [ 6604.012612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967265
> [ 6604.012615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967264
> [ 6604.012617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967263
> [ 6604.012619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967262
> [ 6604.012622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967261
> [ 6604.012624] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967260
> [ 6604.012626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967259
> [ 6636.116612] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967260
> [ 6636.117018] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967261
> [ 6636.117064] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967262
> [ 6636.117191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967263
> [ 6636.117217] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967265
> [ 6636.117204] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967264
> [ 6636.117365] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967266
> [ 6697.656762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967265
> [ 6697.666250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967264
> [ 6697.675731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967263
> [ 6697.685213] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967262
> [ 6697.694703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967261
> [ 6697.704188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967260
> [ 6697.713685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967259
> [ 6699.748818] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967260
> [ 6699.749045] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967261
> [ 6699.749350] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967262
> [ 6699.749488] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967264
> [ 6699.749487] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967263
> [ 6699.749673] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967265
> [ 6700.169570] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967266
> [ 6714.982644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967264
> [ 6714.982749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967265
> [ 6752.225916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967264
> [ 6752.235410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967263
> [ 6752.244901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967262
> [ 6752.254387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967261
> [ 6752.263875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967260
> [ 6752.273361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967259
> [ 6763.509990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967260
> [ 6763.510135] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967261
> [ 6763.510150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967262
> [ 6763.510183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967263
> [ 6763.510242] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967264
> [ 6763.510270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967265
> [ 6763.512906] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967266
> [ 6823.022727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967265
> [ 6823.022730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967264
> [ 6823.022731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967263
> [ 6823.022734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967262
> [ 6823.022735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967261
> [ 6823.022736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967260
> [ 6823.022738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967259
> [ 6823.024701] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967260
> [ 6823.024824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967261
> [ 6823.025069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967263
> [ 6823.024976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967262
> [ 6823.025323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967264
> [ 6823.025427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967265
> [ 6929.234367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967264
> [ 6929.243863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967263
> [ 6929.253358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967262
> [ 6929.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967261
> [ 6929.272333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967260
> [ 6929.281822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967259
> [ 6930.403685] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967260
> [ 6930.403904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967261
> [ 6930.404088] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967262
> [ 6930.404223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967263
> [ 6930.404286] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967264
> [ 6930.404292] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967265
> [ 6994.814514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967264
> [ 6994.824001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967263
> [ 6994.833494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967262
> [ 6994.842983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967261
> [ 6994.852473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967260
> [ 6994.861960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967259
> [ 6997.854357] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935572712+8) 4294967265
> [ 7031.426286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967264
> [ 7031.435774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967263
> [ 7031.452511] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967262
> [ 7031.468341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967261
> [ 7031.484182] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967260
> [ 7039.434351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967259
> [ 7045.236931] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967260
> [ 7045.237482] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967261
> [ 7045.237696] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967262
> [ 7045.237743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967263
> [ 7056.937353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967264
> [ 7056.937578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967265
> [ 7056.940551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967264
> [ 7056.940553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967263
> [ 7056.940554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967262
> [ 7056.940555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967261
> [ 7056.940556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967260
> [ 7056.940557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967259
> [ 7083.864814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967260
> [ 7083.865053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967262
> [ 7083.865036] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967261
> [ 7083.865102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967263
> [ 7083.865159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967264
> [ 7083.964009] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967265
> [ 7095.497485] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967266
> [ 7155.158072] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967265
> [ 7155.158073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967264
> [ 7155.158074] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967263
> [ 7155.158076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967262
> [ 7155.158077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967261
> [ 7155.158078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967260
> [ 7155.158079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967259
> [ 7155.165525] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967260
> [ 7180.285881] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967261
> [ 7183.167275] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967262
> [ 7183.414146] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967263
> [ 7224.653276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967265
> [ 7224.662765] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967264
> [ 7224.672249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967263
> [ 7224.681736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967262
> [ 7224.691228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967261
> [ 7224.700720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967260
> [ 7224.710207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967259
> [ 7229.399854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967260
> [ 7229.399922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967261
> [ 7229.400041] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967262
> [ 7229.400099] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967263
> [ 7229.400157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967264
> [ 7229.400221] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967265
> [ 7288.006416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967260
> [ 7288.006417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967261
> [ 7288.006420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967262
> [ 7288.006422] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967263
> [ 7288.006605] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967264
> [ 7288.006752] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967265
> [ 7288.006975] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967266
> [ 7353.182856] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967261
> [ 7353.182854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967260
> [ 7353.182949] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967262
> [ 7353.183001] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967263
> [ 7353.183401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967264
> [ 7353.183737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967266
> [ 7353.183726] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967265
> [ 7353.184047] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967267
> [ 7443.628841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967264
> [ 7443.638347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967263
> [ 7443.647826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967262
> [ 7443.657311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967261
> [ 7443.666797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967260
> [ 7443.676282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967259
> [ 7501.172830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967263
> [ 7501.182322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967262
> [ 7501.191809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967261
> [ 7501.201294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967260
> [ 7501.210778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967259
> [ 7508.208830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967260
> [ 7508.209597] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967261
> [ 7508.209670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967262
> [ 7522.177756] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967263
> [ 7522.177879] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967264
> [ 7522.177881] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967265
> [ 7550.776037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967264
> [ 7550.785525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967263
> [ 7550.795016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967262
> [ 7550.804501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967261
> [ 7550.813985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967260
> [ 7550.823470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967259
> [ 7556.140566] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967260
> [ 7556.140598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967261
> [ 7556.140739] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967262
> [ 7556.140798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967263
> [ 7556.140931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967264
> [ 7556.141063] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967265
> [ 7556.141111] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967266
> [ 7556.141212] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967267
> [ 7589.706135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967266
> [ 7589.991286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967265
> [ 7590.277340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967264
> [ 7590.563347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967263
> [ 7590.849389] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967262
> [ 7591.135445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967261
> [ 7591.421473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967260
> [ 7591.707517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967259
> [ 7606.172838] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204191720+8) 4294967260
> [ 7703.615017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967263
> [ 7703.624510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967262
> [ 7703.634001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967261
> [ 7703.643491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967260
> [ 7703.652977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967259
> [ 7708.933190] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967260
> [ 7708.933333] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967261
> [ 7708.933473] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967262
> [ 7708.933618] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967263
> [ 7708.933620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967264
> [ 7708.933657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967265
> [ 7708.933663] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967266
> [ 7758.303406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967265
> [ 7758.303407] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967264
> [ 7758.303408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967263
> [ 7758.303410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967262
> [ 7758.303411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967261
> [ 7758.303412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967260
> [ 7758.303413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967259
> [ 7778.439143] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967260
> [ 7778.439197] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967261
> [ 7778.439279] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967262
> [ 7778.439376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967263
> [ 7778.439409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967264
> [ 7778.439494] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967265
> [ 7858.899558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967264
> [ 7858.899559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967263
> [ 7858.899561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967262
> [ 7858.899562] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967261
> [ 7858.899563] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967260
> [ 7858.899564] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967259
> [ 7890.583124] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967260
> [ 7890.583147] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967261
> [ 7890.583594] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967262
> [ 7890.583650] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967263
> [ 7890.584141] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967264
> [ 7890.584215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967265
> [ 7890.584351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967266
> [ 7952.730165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967265
> [ 7952.739650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967264
> [ 7952.749137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967263
> [ 7952.758627] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967262
> [ 7952.768110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967261
> [ 7952.777595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967260
> [ 7952.787077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967259
> [ 7966.676635] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967260
> [ 7966.676647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967261
> [ 7966.676670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967263
> [ 7966.676658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967262
> [ 7966.676686] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967264
> [ 7966.677634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967265
> [ 7966.708979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967266
> [ 8032.243861] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967265
> [ 8032.253352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967264
> [ 8032.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967263
> [ 8032.272336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967262
> [ 8032.281826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967261
> [ 8032.291317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967260
> [ 8032.300800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967259
> [ 8043.901514] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967260
> [ 8043.901516] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967261
> [ 8043.901563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967262
> [ 8043.901612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967264
> [ 8043.901609] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967263
> [ 8043.907672] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967265
> [ 8146.468217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967264
> [ 8146.477717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967263
> [ 8146.487204] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967262
> [ 8146.496692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967261
> [ 8146.506180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967260
> [ 8146.515671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967259
> [ 8151.492003] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967260
> [ 8151.492121] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967261
> [ 8151.492457] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967262
> [ 8151.492601] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967264
> [ 8151.492590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967263
> [ 8151.492750] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967265
> [ 8164.821795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967266
> [ 8200.519377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967260
> [ 8200.519505] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967261
> [ 8200.519782] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967262
> [ 8200.519805] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967263
> [ 8200.520020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967264
> [ 8200.520247] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967265
> [ 8200.520434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967266
> [ 8200.520558] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967267
> [ 8231.475052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967266
> [ 8231.637009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967265
> [ 8231.799001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967264
> [ 8231.960972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967263
> [ 8232.122948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967262
> [ 8232.284905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967261
> [ 8232.446896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967260
> [ 8232.608893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967259
> [ 8251.913368] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967260
> [ 8251.913382] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967263
> [ 8251.913377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967261
> [ 8251.913388] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967265
> [ 8251.913387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967264
> [ 8251.913379] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967262
> [ 8302.630262] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967261
> [ 8302.630318] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967263
> [ 8302.630275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967262
> [ 8302.630250] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967260
> [ 8302.630745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967264
> [ 8377.488095] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967263
> [ 8377.497581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967262
> [ 8377.507068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967261
> [ 8377.516554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967260
> [ 8377.526049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967259
> [ 8382.868830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967260
> [ 8382.868911] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967261
> [ 8382.869109] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967262
> [ 8382.869323] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967263
> [ 8382.983582] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967264
> [ 8407.895450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967263
> [ 8407.895452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967262
> [ 8407.895455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967261
> [ 8407.895457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967260
> [ 8407.895459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967259
> [ 8454.553018] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967264
> [ 8454.570223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967263
> [ 8454.587423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967262
> [ 8454.587426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967261
> [ 8454.604638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967260
> [ 8454.621845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967259
> [ 8498.349228] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967261
> [ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967262
> [ 8498.349217] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967260
> [ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967263
> [ 8498.349317] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967264
> [ 8498.349366] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967265
> [ 8517.904808] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967266
> [ 8551.340824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967265
> [ 8551.350319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967264
> [ 8551.359809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967263
> [ 8551.369300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967262
> [ 8551.378788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967261
> [ 8551.388272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967260
> [ 8551.397756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967259
> [ 8599.114364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967265
> [ 8599.114366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967264
> [ 8599.114367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967263
> [ 8599.114368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967262
> [ 8599.114370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967261
> [ 8599.114371] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967260
> [ 8599.114372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967259
> [ 8599.117759] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
> [ 8623.906310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
> [ 8623.909333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967260
> [ 8623.909335] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967259
> [ 8623.909624] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
> [ 8623.910846] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
> [ 8623.913364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967262
> [ 8625.066563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967263
> [ 8651.338552] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204280104+8) 4294967267
> [ 8693.930170] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967266
> [ 8693.939647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967265
> [ 8693.949139] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967264
> [ 8693.958637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967263
> [ 8693.968135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967262
> [ 8693.977622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967261
> [ 8693.987106] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967260
> [ 8693.996589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967259
> [ 8703.470915] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967260
> [ 8703.470931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967261
> [ 8703.470985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967264
> [ 8703.470977] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967263
> [ 8703.470957] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967262
> [ 8703.471000] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967265
> [ 8703.471037] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967266
> [ 8771.557361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967265
> [ 8771.566858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967264
> [ 8771.576344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967263
> [ 8771.585830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967262
> [ 8771.595316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967261
> [ 8771.604797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967260
> [ 8771.614273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967259
> [ 8772.929338] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967260
> [ 8772.929454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967261
> [ 8772.929545] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967262
> [ 8772.929764] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967263
> [ 8772.929816] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967264
> [ 8772.929850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967265
> [ 8847.837534] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967263
> [ 8847.837548] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967264
> [ 8847.843801] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967265
> [ 8945.958057] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967260
> [ 8945.958072] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967261
> [ 8945.958101] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967262
> [ 8945.958105] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967263
> [ 8945.958112] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967264
> [ 8945.958137] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967265
> [ 8991.941073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967264
> [ 8991.941075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967263
> [ 8991.941076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967262
> [ 8991.941077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967261
> [ 8991.941078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967260
> [ 8991.941080] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967259
> [ 9036.005328] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967260
> [ 9036.005409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967261
> [ 9036.006275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967262
> [ 9036.006348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967263
> [ 9036.006436] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967264
> [ 9036.006517] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967265
> [ 9089.090686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967264
> [ 9089.100181] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967263
> [ 9089.109670] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967262
> [ 9089.119157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967261
> [ 9089.128642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967260
> [ 9089.138130] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967259
> [ 9120.298531] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967266
> [ 9120.298540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967265
> [ 9120.298547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967264
> [ 9120.298555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967263
> [ 9120.298568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967262
> [ 9120.298581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967261
> [ 9120.298599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967260
> [ 9120.298613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967259
> [ 9120.440293] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967260
> [ 9120.440348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967261
> [ 9120.440387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967262
> [ 9120.440528] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967263
> [ 9120.440553] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967264
> [ 9120.440625] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967265
> [ 9157.076832] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967266
> [ 9204.360610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967265
> [ 9204.370096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967264
> [ 9204.379585] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967263
> [ 9204.389070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967262
> [ 9204.398557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967261
> [ 9204.408047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967260
> [ 9204.417534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967259
> [ 9235.036854] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967260
> [ 9235.036941] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967261
> [ 9235.036985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967262
> [ 9235.037076] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967263
> [ 9235.037103] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967264
> [ 9235.037202] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967265
> [ 9326.601083] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967260
> [ 9326.601219] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967261
> [ 9326.601490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967262
> [ 9326.601519] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967263
> [ 9326.601556] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967264
> [ 9326.601644] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967265
> [ 9326.601736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967266
> [ 9326.607310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967267
> [ 9358.046046] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967266
> [ 9358.055533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967265
> [ 9358.065019] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967264
> [ 9358.074517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967263
> [ 9358.084002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967262
> [ 9358.093495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967261
> [ 9358.102997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967260
> [ 9358.112485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967259
> [ 9361.800927] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967260
> [ 9361.801104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967261
> [ 9361.801282] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967262
> [ 9361.801484] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967263
> [ 9361.801527] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967264
> [ 9361.801620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967265
> [ 9361.801653] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967266
> [ 9430.438322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967265
> [ 9430.447809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967264
> [ 9430.457294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967263
> [ 9430.466782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967262
> [ 9430.476277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967261
> [ 9430.485766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967260
> [ 9430.495250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967259
> [ 9444.781947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967260
> [ 9444.781963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967261
> [ 9444.782418] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967262
> [ 9444.782605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967263
> [ 9444.782662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967264
> [ 9444.782718] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967265
> [ 9444.782857] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967266
> [ 9444.782963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967267
> [ 9460.949714] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967266
> [ 9461.002252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967265
> [ 9461.054806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967264
> [ 9461.107370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967263
> [ 9461.159913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967262
> [ 9461.212473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967261
> [ 9461.265014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967260
> [ 9461.317558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967259
> [ 9472.880989] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967260
> [ 9472.881022] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967262
> [ 9472.881013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967261
> [ 9472.881466] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967263
> [ 9472.881585] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967264
> [ 9472.881617] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967265
> [ 9472.881636] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967266
> [ 9473.230016] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967267
> [ 9484.525992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967266
> [ 9484.607866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967265
> [ 9484.690643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967264
> [ 9484.773393] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967263
> [ 9484.856144] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967262
> [ 9484.938897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967261
> [ 9485.021665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967260
> [ 9485.104419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967259
> [ 9565.878800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967265
> [ 9565.888283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967264
> [ 9565.897767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967263
> [ 9565.907250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967262
> [ 9565.916734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967261
> [ 9565.926219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967260
> [ 9565.935703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967259
> [ 9569.109038] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032745832+8) 4294967260
> [ 9613.943060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032745832+8) 4294967259
> [ 9625.083909] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967264
> [ 9625.083911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967263
> [ 9625.083913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967262
> [ 9625.083914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967261
> [ 9625.083916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967260
> [ 9625.083917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967259
> [ 9686.963155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8599505896+8) 4294967259
> [ 9706.444605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967260
> [ 9706.444608] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967261
> [ 9706.444612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967262
> [ 9706.444615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967263
> [ 9706.444671] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967264
> [ 9706.462013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967265
> [ 9709.460551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967266
> [ 9713.748452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967266
> [ 9713.748682] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967267
> [ 9713.753732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967266
> [ 9713.753735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967265
> [ 9713.753736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967264
> [ 9713.753737] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967263
> [ 9713.753739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967262
> [ 9713.753740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967261
> [ 9713.753741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967260
> [ 9713.753743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967259
> [ 9742.956450] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967260
> [ 9742.956502] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967261
> [ 9742.956503] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967262
> [ 9742.956551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967263
> [ 9743.152042] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967264
> [ 9756.828598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967265
> [ 9811.500567] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967264
> [ 9811.510052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967263
> [ 9811.519534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967262
> [ 9811.529017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967261
> [ 9811.538503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967260
> [ 9811.547984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967259
> [ 9816.207521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967266
> [ 9816.207576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967265
> [ 9816.207628] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967264
> [ 9816.207682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967263
> [ 9816.207747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967262
> [ 9816.207801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967261
> [ 9816.207859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967260
> [ 9816.207914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967259
> [ 9816.211655] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967260
> [ 9816.211658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967261
> [ 9816.211787] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967262
> [ 9816.211882] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967263
> [ 9816.211901] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967264
> [ 9816.211947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967265
> [ 9919.228920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967264
> [ 9919.238395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967263
> [ 9919.247877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967262
> [ 9919.257360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967261
> [ 9919.266845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967260
> [ 9919.276332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967259
> [ 9921.581043] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967261
> [ 9921.580965] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967260
> [ 9921.581099] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967262
> [ 9921.581185] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967263
> [ 9921.581304] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967264
> [ 9921.581341] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967265
> [ 9921.581364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967266
> [ 9921.581365] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967267
> [ 9921.583872] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967266
> [ 9921.583882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967265
> [ 9921.583897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967264
> [ 9921.583913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967263
> [ 9921.583929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967262
> [ 9921.583945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967261
> [ 9921.583960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967260
> [ 9921.583977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967259
> [ 9951.448268] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314088+8) 4294967260
> [ 9986.682431] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967262
> [ 9986.682540] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967263
> [ 9986.682661] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967264
> [ 9986.687019] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967265
> [10026.057977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967264
> [10026.057980] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967263
> [10026.057982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967262
> [10026.057984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967261
> [10026.057986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967260
> [10026.057987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967259
> [10026.060967] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967260
> [10026.061269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967261
> [10026.061642] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967262
> [10026.061728] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967264
> [10026.061715] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967263
> [10026.061745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967265
> [10026.061790] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967266
> [10026.061809] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967267
> [10055.918459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967266
> [10056.153717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967265
> [10056.388985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967264
> [10056.624244] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967263
> [10056.859507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967262
> [10057.094768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967261
> [10057.330043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967260
> [10057.565339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967259
> [10060.361434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967260
> [10060.361487] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967261
> [10060.361634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967262
> [10060.361709] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967263
> [10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967264
> [10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967265
> [10060.361723] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967266
> [10108.302805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967265
> [10108.302808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967264
> [10108.302810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967263
> [10108.302812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967262
> [10108.302814] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967261
> [10108.302816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967260
> [10108.302818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967259
> [10108.307331] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967260
> [10108.307515] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967261
> [10108.307665] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967263
> [10108.307647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967262
> [10108.308273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967264
> [10108.308454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967266
> [10108.308426] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967265
> [10108.308720] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967267
> [10138.300402] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967266
> [10138.516650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967265
> [10138.732928] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967264
> [10138.949166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967263
> [10139.165431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967262
> [10139.381678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967261
> [10139.597964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967260
> [10139.814237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967259
> [10144.088915] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746728+8) 4294967260
> [10188.320954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746728+8) 4294967259
> [10196.658979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967260
> [10196.659207] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967261
> [10196.659343] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967262
> [10196.659430] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967263
> [10196.659439] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967264
> [10196.660163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967265
> [10196.660183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967266
> [10279.470741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967265
> [10279.480231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967264
> [10279.489723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967263
> [10279.499214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967262
> [10279.508709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967261
> [10279.518206] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967260
> [10279.527693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967259
> [10286.896922] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746984+8) 4294967260
> [10304.464308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746984+8) 4294967259
> [10305.667186] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967260
> [10305.667269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967261
> [10305.667270] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967262
> [10305.667429] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967263
> [10305.667512] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967264
> [10305.667590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967265
> [10305.667639] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967266
> [10334.751820] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
> [10415.904902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
> [10415.960311] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
> [10423.280595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
> [10439.777461] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967263
> [10439.777592] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967264
> [10439.777647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967265
> [10439.777786] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967266
> [10497.698903] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967265
> [10497.698905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967264
> [10497.698907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967263
> [10497.698908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967262
> [10497.698910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967261
> [10497.698911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967260
> [10497.698912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967259
> [10497.701113] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967260
> [10497.701118] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967261
> [10497.701183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967262
> [10497.701490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967263
> [10497.701908] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967264
> [10497.702132] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967265
> [10593.723273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967264
> [10593.723280] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967265
> [10593.723381] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967266
> [10681.179411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967265
> [10681.188893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967264
> [10681.198368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967263
> [10681.207851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967262
> [10681.217350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967261
> [10681.226842] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967260
> [10681.236340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967259
> [10689.151359] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967260
> [10689.151633] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967261
> [10689.151649] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967262
> [10689.151700] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967263
> [10689.151823] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967264
> [10689.152267] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967265
> [10689.152370] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967266
> [10765.465443] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967260
> [10765.465539] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967261
> [10765.465942] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967262
> [10765.466073] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967263
> [10765.471339] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967264
> [10765.471352] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967265
> [10765.471615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967266
> [10810.806942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967265
> [10810.816430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967264
> [10810.825920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967263
> [10810.835410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967262
> [10810.844892] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967261
> [10810.854379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967260
> [10810.863866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967259
> [10829.079850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741313832+8) 4294967260
> [10849.794350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
> [10918.134642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967264
> [10918.144122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967263
> [10918.153605] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967262
> [10918.163087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967261
> [10918.172569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967260
> [10918.182053] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
> [10925.672507] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967260
> [10925.672673] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967262
> [10925.672533] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967261
> [10925.672788] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967263
> [10925.672958] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967264
> [10925.672978] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967265
> [10925.673524] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967266
> [10925.673721] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967267
> [10938.064956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967266
> [10938.263167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967265
> [10938.461362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967264
> [10938.659537] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967263
> [10938.857718] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967262
> [10939.055907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967261
> [10939.254110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967260
> [10939.452289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967259
> [10942.625740] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967260
> [10942.625791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967262
> [10942.625850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967264
> [10942.625849] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967263
> [10942.625789] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967261
> [10942.626403] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967265
> [11020.726643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967264
> [11020.726645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967263
> [11020.726646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967262
> [11020.726648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967261
> [11020.726649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967260
> [11020.726651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967259
> [11045.762697] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967262
> [11045.763802] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967263
> [11045.763966] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967264
> [11045.764660] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967265
> [11072.427987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967266
> [11072.429693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967265
> [11072.429971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967264
> [11072.430020] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967263
> [11072.430076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967262
> [11072.430127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967261
> [11072.430180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967260
> [11072.430234] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967259
> [11103.176662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967260
> [11103.176760] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967261
> [11103.176914] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967262
> [11103.176947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967263
> [11103.177351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967264
> [11103.243210] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967265
> [11197.368568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967264
> [11197.378067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967263
> [11197.387571] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967262
> [11197.397061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967261
> [11197.406551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967260
> [11197.416027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967259
> [11213.302960] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741352808+8) 4294967260
> [11214.005987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967265
> [11214.005989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967264
> [11214.005990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967263
> [11214.005991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967262
> [11214.005992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967261
> [11214.005994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967260
> [11214.005995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967259
> [11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967260
> [11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967261
> [11251.017233] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967263
> [11251.017238] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967266
> [11251.017236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967265
> [11251.017234] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967264
> [11251.017231] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967262
> [11269.980634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967267
> [11289.588526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967266
> [11289.598008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967265
> [11289.607492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967264
> [11289.616977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967263
> [11289.626462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967262
> [11289.635951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967261
> [11289.645439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967260
> [11289.654929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967259
> [11289.955406] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967260
> [11289.955748] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967261
> [11289.955828] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967262
> [11289.956014] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967263
> [11310.687017] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967264
> [11344.753308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967266
> [11344.753344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967265
> [11344.753381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967264
> [11344.753417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967263
> [11344.753453] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967262
> [11344.753488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967261
> [11344.753524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967260
> [11344.753559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967259
> [11372.057310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967260
> [11372.057453] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967261
> [11372.058126] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967262
> [11372.058225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967263
> [11372.058236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967264
> [11372.058445] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967265
> [11477.692568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967264
> [11477.702057] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967263
> [11477.711549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967262
> [11477.721041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967261
> [11477.730527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967260
> [11477.740015] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967259
> [11484.738964] __add_stripe_bio: md127: start ff2721beec8c2fa0(28725142504+8) 4294967260
> [11516.590602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28725142504+8) 4294967259
> [11541.514580] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967260
> [11541.514657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967261
> [11541.514736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967262
> [11541.514795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967263
> [11541.514937] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967264
> [11541.514959] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967265
> [11541.515020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967266
> [11541.515178] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967267
> [11589.245255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967266
> [11589.530527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967265
> [11589.815730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967264
> [11590.100882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967263
> [11590.386071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967262
> [11590.671228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967261
> [11590.956368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967260
> [11591.241504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967259
> [11665.205805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967264
> [11665.215300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967263
> [11665.224797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967262
> [11665.234283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967261
> [11665.243770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967260
> [11665.253265] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967259
> [11676.491376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967260
> [11676.491588] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967261
> [11676.491637] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967262
> [11676.491798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967263
> [11676.492010] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967264
> [11785.703215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967260
> [11787.558791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967261
> [11787.558892] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967262
> [11791.614199] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967263
> [11793.388452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967264
> [11795.421600] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967265
> [11795.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967266
> [11795.424300] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967267
> [11795.426502] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967266
> [11795.426503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967265
> [11795.426504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967264
> [11795.426505] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967263
> [11795.426506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967262
> [11795.426507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967261
> [11795.426509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967260
> [11795.426510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967259
> [11871.476819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967260
> [11871.486321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967259
> [11919.615037] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967260
> [11919.615114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967261
> [11919.615374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967262
> [11919.615395] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967263
> [11919.615491] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967264
> [11928.774840] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967267
> [12000.001506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967264
> [12000.001507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967263
> [12000.001509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967262
> [12000.001510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967261
> [12000.001512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967260
> [12000.001514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967259
> [12033.086368] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967260
> [12033.086440] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967261
> [12033.086702] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967262
> [12033.086790] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967263
> [12033.087023] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967264
> [12071.801701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967262
> [12071.801733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967264
> [12071.801727] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967263
> [12071.801859] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967265
> [12071.801934] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967266
> [12147.838099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967265
> [12147.838104] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967264
> [12147.838108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967263
> [12147.838113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967262
> [12147.838117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967261
> [12147.838122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967260
> [12147.838131] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967259
> [12161.825218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967260
> [12171.278213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967261
> [12171.278308] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967262
> [12171.278349] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967263
> [12171.278418] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967264
> [12171.278481] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967265
> [12225.694791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967264
> [12225.704279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967263
> [12225.713774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967262
> [12225.723264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967261
> [12225.732759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967260
> [12225.742248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967259
> [12241.217982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967260
> [12241.218022] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967261
> [12241.218156] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967262
> [12241.218291] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967263
> [12241.218712] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967264
> [12241.225590] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967265
> [12324.241931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967264
> [12324.251421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967263
> [12324.260905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967262
> [12324.270390] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967261
> [12324.279874] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967260
> [12324.289362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967259
> [12330.627283] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967261
> [12330.627282] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967260
> [12330.627356] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967262
> [12330.627452] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967263
> [12330.627459] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967264
> [12330.627476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967265
> [12360.643250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967265
> [12360.643251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967264
> [12360.643253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967263
> [12360.643254] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967262
> [12360.643256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967261
> [12360.643257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967260
> [12360.643258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967259
> [12412.055085] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967262
> [12412.055185] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967263
> [12412.055346] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967264
> [12412.055358] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967265
> [12502.916463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967264
> [12502.925955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967263
> [12502.935437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967262
> [12502.944921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967261
> [12502.954403] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967260
> [12502.963891] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967259
> [12508.172163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28994103208+8) 4294967260
> [12569.394168] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28994103208+8) 4294967259
> [12669.806358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967264
> [12669.815852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967263
> [12669.825349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967262
> [12669.834848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967261
> [12669.844337] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967260
> [12669.853824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967259
> [12681.132403] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967260
> [12681.132623] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967261
> [12681.132903] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967262
> [12681.133118] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967263
> [12681.133360] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967264
> [12681.133472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967265
> [12681.133687] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967266
> [12756.131024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967265
> [12756.140520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967264
> [12756.150010] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967263
> [12756.159496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967262
> [12756.168984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967261
> [12756.178470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967260
> [12756.187958] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967259
> [12761.752380] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967260
> [12761.752555] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967261
> [12761.752566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967262
> [12761.752717] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967263
> [12761.752864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967264
> [12761.753575] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967265
> [12841.108192] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967264
> [12841.117686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967263
> [12841.127177] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967262
> [12841.136667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967261
> [12841.146150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967260
> [12841.155642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967259
> [12854.788520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967262
> [12854.789006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967263
> [12854.790480] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967264
> [12854.792345] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967265
> [12854.792371] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967266
> [12854.792648] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967267
> [12854.796137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967266
> [12854.796140] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967265
> [12854.796143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967264
> [12854.796145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967263
> [12854.796147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967262
> [12854.796149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967261
> [12854.796151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967260
> [12854.796152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967259
> [12979.496382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967265
> [12979.505867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967264
> [12979.515357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967263
> [12979.524845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967262
> [12979.534338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967261
> [12979.543825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967260
> [12979.553315] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967259
> [12987.839356] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032747304+8) 4294967260
> [13019.716541] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032747304+8) 4294967259
> [13023.790667] __add_stripe_bio: md127: start ff2721beec8c2fa0(8053065000+8) 4294967260
> [13166.159630] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8053065000+8) 4294967259
> [13172.153701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967260
> [13172.153856] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967261
> [13172.154183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967262
> [13172.154307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967263
> [13172.154320] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967264
> [13172.154325] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967265
> [13172.154327] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967266
> [13172.154540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967267
> [13184.154929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967266
> [13184.395309] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967265
> [13184.635656] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967264
> [13184.876002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967263
> [13185.116361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967262
> [13185.356722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967261
> [13185.597099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967260
> [13185.837445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967259
> [13200.462739] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967260
> [13200.463373] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967261
> [13200.463433] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967262
> [13200.463686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967263
> [13200.463718] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967264
> [13200.463750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967265
> [13200.463801] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967266
> [13200.463824] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967267
> [13218.832166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967266
> [13218.964853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967265
> [13219.097514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967264
> [13219.230197] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967263
> [13219.362875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967262
> [13219.495536] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967261
> [13219.628221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967260
> [13219.760852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967259
> [13284.342646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967264
> [13284.352133] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967263
> [13284.361618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967262
> [13284.371115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967261
> [13284.380606] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967260
> [13284.390092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967259
> [13290.205839] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312350184+8) 4294967260
> [13328.994695] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312350184+8) 4294967259
> [13358.709396] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967261
> [13358.709379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967260
> [13358.709414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967262
> [13358.709435] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967263
> [13358.709475] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967264
> [13362.737444] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967265
> [13362.737666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967266
> [13386.563843] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967260
> [13386.563968] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967261
> [13386.564092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967262
> [13386.564227] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967263
> [13386.564297] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967264
> [13386.564364] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967265
> [13386.564532] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967266
> [13425.701678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967265
> [13425.701680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967264
> [13425.701681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967263
> [13425.701682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967262
> [13425.701684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967261
> [13425.701686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967260
> [13425.701688] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967259
> [13491.009481] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967264
> [13491.018974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967263
> [13491.028463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967262
> [13491.037945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967261
> [13491.047430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967260
> [13491.056918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967259
> [13493.360976] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967260
> [13493.361476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967261
> [13493.361592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967262
> [13493.366880] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967263
> [13493.367183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967264
> [13493.367399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967265
> [13493.367635] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967266
> [13493.367706] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967267
> [13524.736988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967266
> [13524.746476] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967265
> [13524.755962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967264
> [13524.765450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967263
> [13524.774943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967262
> [13524.784436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967261
> [13524.793931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967260
> [13524.803422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967259
> [13529.444566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967260
> [13529.444628] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967261
> [13529.445249] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967262
> [13529.445330] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967264
> [13529.445307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967263
> [13529.445578] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967265
> [13529.445594] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967266
> [13568.836756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967265
> [13568.836757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967264
> [13568.836759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967263
> [13568.836766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967262
> [13568.836768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967261
> [13568.836769] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967260
> [13568.836770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967259
> [13595.486041] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967260
> [13595.486114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967261
> [13595.486450] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967262
> [13595.486544] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967263
> [13595.486756] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967264
> [13595.486807] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967265
> [13595.487127] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967266
> [13684.444417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967265
> [13684.453904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967264
> [13684.463391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967263
> [13684.472878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967262
> [13684.482361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967261
> [13684.491850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967260
> [13684.501340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967259
> [13686.643818] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967260
> [13686.643853] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967261
> [13686.643948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967262
> [13686.643993] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967263
> [13686.644144] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967264
> [13686.644406] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967265
> [13686.644525] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967266
> [13734.793297] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967265
> [13734.809137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967264
> [13744.445102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967263
> [13744.454593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967262
> [13744.464083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967261
> [13744.473560] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967260
> [13744.483051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967259
> [13820.332081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967264
> [13820.341572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967263
> [13820.351058] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967262
> [13820.360550] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967261
> [13820.370039] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967260
> [13820.379525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967259
> [13828.887574] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967260
> [13828.888698] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967261
> [13828.888811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967262
> [13828.888838] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967263
> [13828.888877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967264
> [13828.889012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967265
> [13828.889087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967266
> [13939.263249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967265
> [13939.272744] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967264
> [13939.282233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967263
> [13939.291721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967262
> [13939.301203] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967261
> [13939.310687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967260
> [13939.320173] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967259
> [13949.622362] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032748904+8) 4294967260
> [13975.927477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324672360+8) 4294967267
> [13975.932834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967266
> [13975.932837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967265
> [13975.932840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967264
> [13975.932843] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967263
> [13975.932845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967262
> [13975.932848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967261
> [13975.932851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967260
> [13975.932853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967259
> [14002.581980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967260
> [14002.582096] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967261
> [14002.582385] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967262
> [14002.582558] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967263
> [14002.582619] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967264
> [14002.582667] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967265
> [14119.130023] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967264
> [14119.139513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967263
> [14119.148996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967262
> [14119.158486] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967261
> [14119.167972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967260
> [14119.177456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967259
> [14124.072930] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967260
> [14124.073027] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967262
> [14124.073025] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967261
> [14124.073126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967263
> [14124.073129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967264
> [14124.073379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967265
> [14210.467642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967264
> [14210.467644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967263
> [14210.467645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967262
> [14210.467647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967261
> [14210.467650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967260
> [14210.467652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967259
> [14239.699953] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312349928+8) 4294967260
> [14343.004904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312349928+8) 4294967259
> [14351.607882] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967260
> [14351.607891] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967262
> [14351.607979] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967263
> [14351.607884] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967261
> [14351.608559] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967264
> [14351.608811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967265
> [14351.691948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967266
> [14460.596492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967265
> [14460.596493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967264
> [14460.596494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967263
> [14460.596495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967262
> [14460.596496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967261
> [14460.596497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967260
> [14460.596499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967259
> [14460.598694] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967260
> [14460.598948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967261
> [14460.599105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967262
> [14460.599222] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967263
> [14460.599261] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967264
> [14460.599376] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967265
> [14541.632593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967264
> [14541.642093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967263
> [14541.651581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967262
> [14541.661069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967261
> [14541.670554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967260
> [14541.680042] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967259
> [14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967260
> [14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967261
> [14547.547058] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967262
> [14547.547167] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967263
> [14547.547468] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967264
> [14547.547552] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967265
> [14547.547661] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967266
> [14547.547697] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967267
> [14575.315268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967266
> [14575.596007] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967265
> [14575.876716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967264
> [14576.157450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967263
> [14576.438196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967262
> [14576.718943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967261
> [14576.999682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967260
> [14577.280400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967259
> [14582.737304] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749480+8) 4294967260
> [14636.539506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749480+8) 4294967259
> [14638.880107] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749864+8) 4294967260
> [14657.493830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749864+8) 4294967259
> [14675.212921] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967260
> [14675.213033] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967261
> [14675.213091] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967262
> [14675.213105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967263
> [14675.213429] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967264
> [14675.213477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967265
> [14675.213877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967266
> [14737.052888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967265
> [14737.062372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967264
> [14737.071858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967263
> [14737.081341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967262
> [14737.090827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967261
> [14737.100311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967260
> [14737.109810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967259
> [14743.382147] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967260
> [14743.382495] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967261
> [14743.382520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967262
> [14743.382595] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967263
> [14743.382607] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967264
> [14743.382646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967265
> [14743.382683] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967266
> [14836.115483] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967265
> [14836.124971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967264
> [14836.134457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967263
> [14836.143943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967262
> [14836.153427] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967261
> [14836.162908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967260
> [14836.172392] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967259
> [14867.977410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967265
> [14867.977411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967264
> [14867.977413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967263
> [14867.977414] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967262
> [14867.977416] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967261
> [14867.977418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967260
> [14867.977419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967259
> [14867.982046] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967260
> [14867.982289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967261
> [14867.982319] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967262
> [14867.982377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967263
> [14867.982398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967264
> [14867.982409] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967265
> [14906.532978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967264
> [14906.532983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967263
> [14906.532987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967262
> [14906.532991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967261
> [14906.532995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967260
> [14906.533001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967259
> [14906.537331] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967260
> [14906.537374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967261
> [14906.537389] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967262
> [14906.537397] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967263
> [14906.537413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967264
> [14906.537417] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967265
> [14906.538256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967266
> [15000.553174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967265
> [15000.562663] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967264
> [15000.572145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967263
> [15000.581619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967262
> [15000.591109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967261
> [15000.600591] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967260
> [15000.610079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967259
> [15038.840534] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967265
> [15038.840569] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967266
> [15038.918437] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967267
> [15072.110477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967266
> [15072.119963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967265
> [15072.129445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967264
> [15072.138931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967263
> [15072.148422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967262
> [15072.157919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967261
> [15072.167410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967260
> [15072.176895] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967259
> [15094.337077] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967260
> [15094.337106] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967262
> [15094.337126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967264
> [15094.337145] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967265
> [15094.337125] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967263
> [15094.337095] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967261
> [15094.337202] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967266
> [15094.337864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967267
> [15109.185642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967266
> [15109.231325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967265
> [15109.276996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967264
> [15109.322684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967263
> [15109.368357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967262
> [15109.414059] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967261
> [15109.459734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967260
> [15109.505426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967259
> [15109.940853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967266
> [15109.940854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967265
> [15109.940856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967264
> [15109.940857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967263
> [15109.940858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967262
> [15109.940860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967261
> [15109.940862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967260
> [15109.940863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967259
> [15109.943869] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032750824+8) 4294967260
> [15109.946364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032750824+8) 4294967259
> [15143.938195] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967260
> [15143.938199] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967261
> [15143.938502] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967262
> [15143.938860] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967263
> [15143.939079] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967265
> [15143.939055] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967264
> [15194.321503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967264
> [15194.330986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967263
> [15194.340472] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967262
> [15194.349954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967261
> [15194.359431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967260
> [15194.368920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967259
> [15202.981864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967260
> [15202.982006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967262
> [15202.982049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967263
> [15202.981938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967261
> [15202.982750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967264
> [15284.694932] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967263
> [15284.704421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967262
> [15284.713913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967261
> [15284.723400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967260
> [15284.732888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967259
> [15293.005593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967264
> [15293.005596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967263
> [15293.005597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967262
> [15293.005599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967261
> [15293.005602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967260
> [15293.005603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967259
> [15293.008211] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032751272+8) 4294967260
> [15293.009308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032751272+8) 4294967259
> [15361.123918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967265
> [15361.133405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967264
> [15361.142894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967263
> [15361.152384] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967262
> [15361.161896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967261
> [15361.171380] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967260
> [15361.180864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967259
> [15364.021538] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967260
> [15364.021666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967261
> [15364.022045] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967262
> [15364.022114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967264
> [15364.022092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967263
> [15364.022137] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967265
> [15364.022367] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967266
> [15364.022522] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967267
> [15374.355722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967266
> [15374.554797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967265
> [15374.753840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967264
> [15374.952890] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967263
> [15375.151938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967262
> [15375.351041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967261
> [15375.550089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967260
> [15375.749178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967259
> [15475.090773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967263
> [15475.090778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967262
> [15475.090782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967261
> [15475.090790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967260
> [15475.090795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967259
> [15512.462087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530332008+8) 4294967260
> [15605.182723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29530332008+8) 4294967259
> [15610.777819] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967260
> [15610.778112] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967261
> [15610.778158] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967262
> [15610.778540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967263
> [15610.778741] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967264
> [15610.778768] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967265
> [15610.779126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967266
> [15610.779136] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967267
> [15625.092720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967266
> [15625.291753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967265
> [15625.490794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967264
> [15625.689869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967263
> [15625.888922] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967262
> [15626.087961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967261
> [15626.287002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967260
> [15626.486060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967259
> [15631.421893] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967260
> [15631.422458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967261
> [15631.422563] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967262
> [15631.422804] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967263
> [15631.422822] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967264
> [15631.422885] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967265
> [15649.575279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967266
> [15684.592107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967265
> [15684.601597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967264
> [15684.611088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967263
> [15684.620575] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967262
> [15684.630060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967261
> [15684.639545] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967260
> [15684.649027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967259
> [15690.850975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967260
> [15690.851550] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967261
> [15690.851939] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967262
> [15690.852053] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967263
> [15690.852178] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967264
> [15690.852281] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967265
> [15690.852313] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967266
> [15735.620596] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967260
> [15735.620646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967261
> [15735.620685] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967263
> [15735.620686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967264
> [15735.620652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967262
> [15735.621500] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967265
> [15816.149770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967264
> [15816.149772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967263
> [15816.149774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967262
> [15816.149776] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967261
> [15816.149778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967260
> [15816.149780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967259
> [15844.779214] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967260
> [15844.779218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967261
> [15934.023215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967265
> [15934.032699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967264
> [15934.042184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967263
> [15934.051668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967262
> [15934.061153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967261
> [15934.070639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967260
> [15934.080128] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967259
> [15935.505198] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967260
> [15935.505344] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967261
> [15935.505684] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967262
> [15951.816975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967263
> [15951.817541] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967264
> [15951.817733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967265
> [16048.370516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967264
> [16048.370517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967263
> [16048.370518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967262
> [16048.370520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967261
> [16048.370521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967260
> [16048.370522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967259
> [16048.372719] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967260
> [16048.373083] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967261
> [16048.373765] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967262
> [16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967263
> [16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967264
> [16048.373936] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967265
> [16048.373938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967266
> [16048.373966] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967267
> [16099.152109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967266
> [16099.438210] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967265
> [16099.724249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967264
> [16100.011268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967263
> [16100.298239] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967262
> [16100.585188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967261
> [16100.872155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967260
> [16101.159163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967259
> [16105.965912] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967260
> [16105.966207] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967261
> [16105.966279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967262
> [16105.966399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967263
> [16105.966643] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967264
> [16105.966725] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967265
> [16163.037754] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967260
> [16163.037836] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967261
> [16163.037883] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967262
> [16163.038018] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967263
> [16163.038537] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967264
> [16163.039472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967265
> [16163.263169] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967266
> [16264.133495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967265
> [16264.142981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967264
> [16264.152464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967263
> [16264.161951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967262
> [16264.171441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967261
> [16264.180935] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967260
> [16264.190424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967259
> [16266.992432] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967260
> [16266.992711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967261
> [16266.993213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967262
> [16266.993398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967263
> [16266.993458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967264
> [16290.695547] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967265
> [16290.695556] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967266
> [16379.808266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967265
> [16379.817751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967264
> [16379.827236] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967263
> [16379.836720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967262
> [16379.846205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967261
> [16379.855697] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967260
> [16379.865187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967259
> [16387.725516] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967260
> [16387.725798] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967261
> [16387.725917] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967262
> [16387.726352] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967263
> [16387.726414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967264
> [16387.726707] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967265
> [16408.022981] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967266
> [16505.753964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967265
> [16505.763456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967264
> [16505.772952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967263
> [16505.782444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967262
> [16505.791921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967261
> [16505.801405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967260
> [16505.810889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967259
> [16515.475620] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967261
> [16515.475613] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967260
> [16515.475711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967262
> [16515.475844] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967263
> [16515.475958] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967264
> [16515.476377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967265
> [16515.476744] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967266
> [16534.160611] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967267
> [16554.448056] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967266
> [16554.457546] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967265
> [16554.467022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967264
> [16554.476504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967263
> [16554.485979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967262
> [16554.495463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967261
> [16554.504953] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967260
> [16554.514442] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967259
> [16555.835592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967260
> [16555.835847] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967261
> [16555.836140] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967262
> [16555.836279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967263
> [16576.074289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967264
> [16576.074652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967265
> [16576.075049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967266
> [16702.114836] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967265
> [16702.124323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967264
> [16702.133808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967263
> [16702.143300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967262
> [16702.152799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967261
> [16702.162289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967260
> [16702.171773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967259
> [16710.388044] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967260
> [16710.388157] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967261
> [16710.388256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967262
> [16710.388347] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967264
> [16710.388390] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967265
> [16710.388410] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967266
> [16710.388285] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967263
> [16710.389466] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967267
> [16726.681690] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967266
> [16732.076989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967265
> [16732.227720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967264
> [16732.378463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967263
> [16732.529207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967262
> [16732.679930] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967261
> [16732.830681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967260
> [16732.981424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967259
> [16739.691982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967261
> [16739.691980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967260
> [16739.692049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967262
> [16739.692753] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967263
> [16739.693143] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967264
> [16739.693286] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967265
> [16739.693391] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967266
> [16796.194339] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967260
> [16796.194398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967261
> [16796.194422] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967262
> [16796.194483] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967263
> [16796.194946] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967264
> [16796.195038] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967265
> [16796.195499] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967266
> [16870.648462] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753896+8) 4294967260
> [16898.893981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753896+8) 4294967259
> [16967.311945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967265
> [16967.321437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967264
> [16967.330924] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967263
> [16967.340412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967262
> [16967.349896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967261
> [16967.359379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967260
> [16967.368867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967259
> [16976.071394] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967260
> [16976.071413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967261
> [16976.071469] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967262
> [16976.071568] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967263
> [16976.071677] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967264
> [16976.071732] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967266
> [16976.071700] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967265
> [16976.072068] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967267
> [16990.311360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967266
> [16990.481108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967265
> [16990.650832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967264
> [16990.820572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967263
> [16990.990305] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967262
> [16991.160043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967261
> [16991.329786] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967260
> [16991.499551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967259
> [16996.987839] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754024+8) 4294967260
> [17047.151615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754024+8) 4294967259
> [17115.879905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967264
> [17115.889391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967263
> [17115.898868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967262
> [17115.908352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967261
> [17115.917837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967260
> [17115.927320] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967259
> [17116.400129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967260
> [17116.400218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967261
> [17116.400665] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967262
> [17116.400972] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967263
> [17116.401012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967264
> [17116.401073] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967265
> [17116.401094] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967266
> [17173.631876] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967265
> [17173.631877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967264
> [17173.631878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967263
> [17173.631880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967262
> [17173.631881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967261
> [17173.631883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967260
> [17173.631885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967259
> [17201.424309] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753960+8) 4294967260
> [17209.574673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753960+8) 4294967259
> [17212.610080] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967260
> [17212.610550] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967261
> [17226.574613] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967262
> [17226.574818] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967263
> [17226.575156] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967264
> [17226.575266] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967265
> [17226.575729] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967266
> [17296.342998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967265
> [17296.352083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967264
> [17296.361178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967263
> [17296.370271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967262
> [17296.379366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967261
> [17296.388464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967260
> [17296.397557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967259
> [17297.711781] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967260
> [17297.712071] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967261
> [17311.049521] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967262
> [17311.049595] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967263
> [17391.178025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967262
> [17391.187127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967261
> [17391.196230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967260
> [17391.205325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967259
> [17397.357339] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967260
> [17397.358064] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967261
> [17397.358191] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967262
> [17406.112100] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967263
> [17460.063716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967262
> [17460.072623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967261
> [17460.081520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967260
> [17460.090422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967259
> [17464.959257] __add_stripe_bio: md127: start ff2721beec8c2fa0(75624+8) 4294967260
> [17482.594510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(75624+8) 4294967259
> [17508.091641] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967261
> [17508.091640] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967260
> [17508.091647] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967262
> [17532.456256] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967260
> [17532.456294] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967261
> [17532.456309] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967262
> [17572.776358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967261
> [17572.785257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967260
> [17572.794163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967259
> [17594.427109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(12348140520+8) 4294967259
> [17631.571482] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967262
> [17633.896087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967261
> [17633.904990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967260
> [17633.913889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967259
> [17640.670153] __add_stripe_bio: md127: start ff2721beec8c2fa0(42344+8) 4294967262
> [17661.740739] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967264
> [17661.740869] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967265
> [17691.866848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967264
> [17691.866850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967263
> [17691.866851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967262
> [17691.866853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967261
> [17691.866854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967260
> [17691.866855] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967259
> [17711.055783] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967260
> [17711.055850] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967262
> [17711.055807] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967261
> [17753.659012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967261
> [17753.667920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967260
> [17753.676822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967259
> [17756.839874] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754472+8) 4294967260
> [17761.904589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754472+8) 4294967259
> [17764.608952] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967260
> [17764.609156] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967262
> [17764.609117] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967261
> [17764.609992] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967263
> [17785.372101] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967264
> [17785.480370] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967265
> [17831.956995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967264
> [17831.965897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967263
> [17831.974795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967262
> [17831.983692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967261
> [17831.992592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967260
> [17832.001495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967259
> [17834.344591] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755304+8) 4294967260
> [17843.122828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032755304+8) 4294967259
> [17845.582553] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967260
> [17845.582666] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967261
> [17845.583154] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967262
> [17845.583190] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967264
> [17845.583179] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967263
> [17895.265867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967263
> [17895.274772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967262
> [17895.283673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967261
> [17895.292578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967260
> [17895.301470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967259
> [17898.047030] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967260
> [17898.048282] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967261
> [17898.049252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967260
> [17898.049253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967259
> [17910.857571] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967260
> [17910.857605] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967261
> [17940.805214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967260
> [17940.805216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967259
> [17953.692889] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967261
> [17953.692929] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
> [17953.693143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
> [17953.694264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
> [18003.530258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
> [18003.539162] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967260
> [18003.548066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967259
> [18009.481434] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26216+8) 4294967259
> [18009.492293] __add_stripe_bio: md127: start ff2721beec8c2fa0(25704+8) 4294967260
> [18048.069279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25704+8) 4294967259
> [18048.552558] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967260
> [18048.552796] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967261
> [18048.552825] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967262
> [18048.554933] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967263
> [18081.808216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967262
> [18081.808217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967261
> [18081.808219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967260
> [18081.808220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967259
> [18081.819956] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967260
> [18081.820706] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967261
> [18110.361331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967260
> [18110.361332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967259
> [18160.565496] __add_stripe_bio: md127: start ff2721beec8c2fa0(10792+8) 4294967263
> [18169.306917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967260
> [18169.306918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967259
> [18169.318095] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967260
> [18169.319212] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967261
> [18169.394456] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967262
> [18211.597621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(61800+8) 4294967259
> [18261.334926] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967260
> [18261.335380] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967261
> [18297.192489] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25501361192+8) 4294967259
> [18332.815982] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967262
> [18332.816950] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967263
> [18332.819467] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967264
> [18332.819799] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967265
> [18332.820819] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967266
> [18363.308810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967265
> [18363.308813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967264
> [18363.308816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967263
> [18363.308819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967262
> [18363.308822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967261
> [18363.308825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967260
> [18363.308828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967259
> [18412.849619] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967262
> [18412.850582] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967263
> [18412.850911] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967264
> [18412.851264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967265
> [18443.565725] __add_stripe_bio: md127: start ff2721beec8c2fa0(28454161000+8) 4294967260
> [18473.028395] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967260
> [18473.029608] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967261
> [18502.557646] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967262
> [18502.557723] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967263
> [18502.558152] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967264
> [18502.558930] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967265
> [18502.559041] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967266
> [18502.563022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967265
> [18502.563024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967264
> [18502.563025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967263
> [18502.563026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967262
> [18502.563027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967261
> [18502.563029] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967260
> [18502.563030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967259
> [18560.133303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331007720+8) 4294967259
> [18589.564077] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967260
> [18589.564089] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967261
> [18589.564670] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967262
> [18589.565137] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967263
> [18589.565700] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967264
> [18589.566003] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967265
> [18638.817896] __add_stripe_bio: md127: start ff2721beec8c2fa0(331165224+8) 4294967260
> [18639.851587] __add_stripe_bio: md127: start ff2721beec8c2fa0(4563405800+8) 4294967260
> [18721.230354] __add_stripe_bio: md127: start ff2721beec8c2fa0(6174129128+8) 4294967260
> [18753.264400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6174129128+8) 4294967259
> [18814.918267] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755240+8) 4294967260
> [18817.035728] __add_stripe_bio: md127: start ff2721beec8c2fa0(331192424+8) 4294967267
> [18817.037803] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967266
> [18817.037809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967265
> [18817.037812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967264
> [18817.037815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967263
> [18817.037818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967262
> [18817.037822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967261
> [18817.037825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967260
> [18817.037827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967259
> [18847.022837] __add_stripe_bio: md127: start ff2721beec8c2fa0(8589935656+8) 4294967260
> [18931.949431] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530321832+8) 4294967260
> [19054.852844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967265
> [19054.852846] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967264
> [19054.852847] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967263
> [19054.852849] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967262
> [19054.852850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967261
> [19054.852852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967260
> [19054.852853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967259
> [19104.480492] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967264
> [19104.480523] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967265
> [19162.254665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967263
> [19162.254666] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967262
> [19162.254668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967261
> [19162.254669] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967260
> [19162.254671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967259
> [19194.644616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967260
> [19194.644618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967259
> [19225.730035] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967260
> [19225.730135] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967261
> [19225.730341] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967262
> [19225.733024] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967263
> [19225.733509] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967264
> [19225.799551] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967265
> [19250.693927] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967266
> [19251.803761] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967260
> [19251.805818] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967261
> [19251.807214] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967262
> [19251.807230] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967263
> [19251.807441] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967264
> [19251.807684] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967265
> [19284.419215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967265
> [19284.419218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967264
> [19284.419221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967263
> [19284.419222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967262
> [19284.419225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967261
> [19284.419228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967260
> [19284.419230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967259
> [19324.123801] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967260
> [19324.124880] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967261
> [19389.626363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967264
> [19389.626366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967263
> [19389.626370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967262
> [19389.626373] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967261
> [19389.626376] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967260
> [19389.626379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967259
> [19411.000068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967264
> [19411.000070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967263
> [19411.000071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967262
> [19411.000073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967261
> [19411.000075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967260
> [19411.000076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967259
> [19442.885291] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967260
> [19442.885494] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967261
> [19442.885496] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967262
> [19442.885575] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967263
> [19500.040964] __add_stripe_bio: md127: start ff2721beec8c2fa0(536935976+8) 4294967260
> [19503.516938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967263
> [19503.516939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967262
> [19503.516941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967261
> [19503.516942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967260
> [19503.516944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967259
> [19531.506729] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967261
> [19531.507247] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967262
> [19531.510481] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967263
> [19559.370264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967262
> [19559.370268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967261
> [19559.370272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967260
> [19559.370275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967259
> [19590.464792] __add_stripe_bio: md127: start ff2721beec8c2fa0(7788215976+8) 4294967260
> [19620.633883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(7788215976+8) 4294967259
> [19650.250748] __add_stripe_bio: md127: start ff2721beec8c2fa0(536913192+8) 4294967260
> [19680.643891] __add_stripe_bio: md127: start ff2721beec8c2fa0(20135746152+8) 4294967260
> [19708.804030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(20135746152+8) 4294967259
> [19737.574540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967260
> [19737.574543] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967259
> [19765.378569] __add_stripe_bio: md127: start ff2721beec8c2fa0(536900904+8) 4294967261
> [19794.831033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967260
> [19821.381894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967259
> [19821.429688] __add_stripe_bio: md127: start ff2721beec8c2fa0(536898024+8) 4294967264
> [19856.960152] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967260
> [19856.964598] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967261
> [19856.967055] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967262
> [19879.048926] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967263
> [19879.048937] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967264
> [19887.395626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967263
> [19887.395631] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967262
> [19887.395634] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967261
> [19887.395637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967260
> [19887.395639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967259
> [19887.406610] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967260
> [19916.087911] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967261
> [19918.951492] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967260
> [19947.259645] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967261
> [19983.717648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967260
> [19983.717650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967259
> [19983.723154] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
> [19983.723284] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967261
> [19983.723330] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967262
> [19983.723447] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967263
> [20015.225720] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967264
> [20015.225737] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967265
> [20015.233248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967264
> [20015.233249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967263
> [20015.233250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967262
> [20015.233251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967261
> [20015.233252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967260
> [20015.233253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967259
> [20039.634420] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
> [20059.881519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967263
> [20059.881521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967262
> [20059.881522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967261
> [20059.881523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967260
> [20059.881524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967259
> [20091.703960] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967260
> [20091.704062] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967261
> [20091.704130] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967262
> [20091.704371] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967263
> [20091.704597] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967264
> [20091.705014] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967265
> [20091.705043] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967266
> [20091.705080] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967267
> [20107.172534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967266
> [20107.416045] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967265
> [20107.659569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967264
> [20107.903083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967263
> [20108.146684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967262
> [20108.390242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967261
> [20108.633740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967260
> [20108.877229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967259
> [20125.925086] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967260
> [20125.925103] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967261
> [20128.394916] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967262
> [20128.583655] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967263
> [20132.751983] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967264
> [20138.332744] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967260
> [20138.333973] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967261
> [20138.334178] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967262
> [20138.335009] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967263
> [20138.335115] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967264
> [20138.335265] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967265
> [20138.338858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967264
> [20138.338860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967263
> [20138.338862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967262
> [20138.338864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967261
> [20138.338866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967260
> [20138.338868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967259
> [20166.832981] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967260
> [20166.833229] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967261
> [20166.834027] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967262
> [20196.134888] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967263
> [20199.500306] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967263
> [20199.500310] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967262
> [20199.500313] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967261
> [20199.500317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967260
> [20199.500321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
> [20199.942600] __add_stripe_bio: md127: start ff2721beec8c2fa0(555373992+8) 4294967260
> [20199.944367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
> [20245.088563] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967260
> [20245.088642] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967261
> [20245.088687] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967262
> [20245.088777] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967263
> [20245.091384] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967264
> [20245.091670] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967265
> [20245.091900] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967266
> [20245.092055] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967267
> [20271.055283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967266
> [20271.064573] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967265
> [20271.073864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967264
> [20271.083165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967263
> [20275.933633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967262
> [20275.942927] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967261
> [20275.952228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967260
> [20275.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967259
> [20280.815537] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967260
> [20280.816905] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967261
> [20442.104270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967260
> [20443.448533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967259
> [20445.747762] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967260
> [20445.747900] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967261
> [20445.747918] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967262
> [20445.748615] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967263
> [20494.667635] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967264
> [20494.667769] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967265
> [20524.978466] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967264
> [20524.987753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967263
> [20524.997049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967262
> [20525.006349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967261
> [20533.505202] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967260
> [20533.514488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967259
> [20535.464796] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967260
> [20535.465312] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967261
> [20547.361843] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967262
> [20547.362543] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967263
> [20547.362994] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967264
> [20565.098049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967263
> [20565.098051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967262
> [20565.098052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967261
> [20565.098054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967260
> [20565.098055] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967259
> [20565.099574] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
> [20565.099733] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
> [20565.099960] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
> [20609.002609] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
> [20609.011900] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
> [20609.021187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259
> [20612.483895] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
> [20612.484023] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
> [20612.484674] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
> [20641.495298] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
> [20641.504590] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
> [20641.513885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259
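A side note on the trailing counters in the debug lines above (4294967259, 4294967260, …): these values sit just below 2^32 and are consistent with small negative signed counters being printed through an unsigned 32-bit format. A minimal Python sketch to reinterpret them (the helper name is mine, not from the debug patch):

```python
import struct

def as_signed_u32(value: int) -> int:
    """Reinterpret an unsigned 32-bit value as signed two's complement."""
    return struct.unpack("<i", struct.pack("<I", value))[0]

for raw in (4294967259, 4294967260, 4294967295):
    print(raw, "->", as_signed_u32(raw))
# 4294967259 -> -37
# 4294967260 -> -36
# 4294967295 -> -1
```

If that reading is right, the counters in the log are small negative numbers, not huge positive ones.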

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Managing directors: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-01  8:33                                                                         ` Christian Theune
@ 2024-11-03 15:54                                                                           ` Christian Theune
  2024-11-03 16:16                                                                             ` Dragan Milivojević
  2024-11-04 11:29                                                                           ` Yu Kuai
  2024-11-04 11:40                                                                           ` Yu Kuai
  2 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-03 15:54 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

Running without the debug patch again on 6.11.5, I'm still able to reproduce the lockup with the bitmap enabled. I've gathered the full list of all stuck tasks below.

Just as a reminder on the setup, the layering here is:

NVMe drives → mdraid → LVM → dm-crypt → XFS
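For anyone trying to reproduce, that stack can be sketched roughly as follows. This is a hypothetical reconstruction: the device names, RAID geometry, and VG/LV names are placeholders, not the actual configuration; note the internal write-intent bitmap, since the hang only reproduces with the bitmap enabled.

```shell
# Hypothetical reconstruction of the layering (placeholder names throughout).
mdadm --create /dev/md127 --level=6 --raid-devices=4 --bitmap=internal \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
pvcreate /dev/md127
vgcreate vg_backy /dev/md127
lvcreate -l 100%FREE -n backy vg_backy
cryptsetup luksFormat /dev/vg_backy/backy
cryptsetup open /dev/vg_backy/backy backy-crypt
mkfs.xfs /dev/mapper/backy-crypt
mount /dev/mapper/backy-crypt /srv/backy
```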

Any ideas on how to get you better debugging output, other than the printk strategy? The added printks seem to perturb timing enough that the (likely) race condition no longer triggers … ?

Christian

The first hung-task message came in around:

[91177.798820] INFO: task kworker/u129:22:381874 blocked for more than 122 seconds.
[91177.807148]       Not tainted 6.11.5 #1-NixOS
[91177.812050] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[91177.820854] task:kworker/u129:22 state:D stack:0     pid:381874 tgid:381874 ppid:2      flags:0x00004000
[91177.820861] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[91177.820871] Call Trace:
[91177.820872]  <TASK>
[91177.820877]  __schedule+0x425/0x1460
[91177.820886]  schedule+0x27/0xf0
[91177.820889]  md_bitmap_startwrite+0x14f/0x1c0
[91177.820895]  ? __pfx_autoremove_wake_function+0x10/0x10
[91177.820901]  __add_stripe_bio+0x1f4/0x240 [raid456]
[91177.820908]  raid5_make_request+0x364/0x1290 [raid456]
[91177.820915]  ? srso_alias_return_thunk+0x5/0xfbef5
[91177.820918]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[91177.820923]  ? __pfx_woken_wake_function+0x10/0x10
[91177.820926]  ? bio_split_rw+0x141/0x2a0
[91177.820931]  md_handle_request+0x153/0x270
[91177.820935]  ? srso_alias_return_thunk+0x5/0xfbef5
[91177.820938]  __submit_bio+0x190/0x240
[91177.820942]  submit_bio_noacct_nocheck+0x19a/0x3c0
[91177.820945]  ? srso_alias_return_thunk+0x5/0xfbef5
[91177.820947]  ? submit_bio_noacct+0x47/0x5b0
[91177.820950]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[91177.820954]  process_one_work+0x18f/0x3b0
[91177.820959]  worker_thread+0x21f/0x330
[91177.820962]  ? __pfx_worker_thread+0x10/0x10
[91177.820964]  kthread+0xcd/0x100
[91177.820967]  ? __pfx_kthread+0x10/0x10
[91177.820970]  ret_from_fork+0x31/0x50
[91177.820974]  ? __pfx_kthread+0x10/0x10
[91177.820977]  ret_from_fork_asm+0x1a/0x30
[91177.820982]  </TASK>
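The "Show Blocked State" dump that follows was produced via the magic SysRq interface; for reference, such a dump can be requested on demand (assuming CONFIG_MAGIC_SYSRQ is enabled in the kernel) without waiting for the hung-task detector:

```shell
# Enable all SysRq functions, then dump every task in D state to the kernel log.
echo 1 > /proc/sys/kernel/sysrq
echo w > /proc/sysrq-trigger
```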

And I received the current blocked state of all stuck processes just now:

[168277.817543] sysrq: Show Blocked State
[168277.822017] task:xfsaild/dm-4    state:D stack:0     pid:2118  tgid:2118  ppid:2      flags:0x00004000
[168277.822022] Call Trace:
[168277.822024]  <TASK>
[168277.822030]  __schedule+0x425/0x1460
[168277.822038]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.822044]  schedule+0x27/0xf0
[168277.822046]  schedule_timeout+0x9e/0x170
[168277.822050]  ? __pfx_process_timeout+0x10/0x10
[168277.822057]  xfsaild+0xc0/0xa40 [xfs]
[168277.822158]  ? __pfx_xfsaild+0x10/0x10 [xfs]
[168277.822230]  ? __pfx_xfsaild+0x10/0x10 [xfs]
[168277.822299]  kthread+0xcd/0x100
[168277.822304]  ? __pfx_kthread+0x10/0x10
[168277.822307]  ret_from_fork+0x31/0x50
[168277.822311]  ? __pfx_kthread+0x10/0x10
[168277.822314]  ret_from_fork_asm+0x1a/0x30
[168277.822320]  </TASK>
[168277.822362] task:kworker/u129:22 state:D stack:0     pid:381874 tgid:381874 ppid:2      flags:0x00004000
[168277.822366] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.822373] Call Trace:
[168277.822374]  <TASK>
[168277.822376]  __schedule+0x425/0x1460
[168277.822382]  schedule+0x27/0xf0
[168277.822385]  md_bitmap_startwrite+0x14f/0x1c0
[168277.822389]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.822394]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.822402]  raid5_make_request+0x364/0x1290 [raid456]
[168277.822409]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.822410]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.822415]  ? __pfx_woken_wake_function+0x10/0x10
[168277.822418]  ? bio_split_rw+0x141/0x2a0
[168277.822423]  md_handle_request+0x153/0x270
[168277.822428]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.822430]  __submit_bio+0x190/0x240
[168277.822435]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.822438]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.822439]  ? submit_bio_noacct+0x47/0x5b0
[168277.822443]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.822446]  process_one_work+0x18f/0x3b0
[168277.822451]  worker_thread+0x21f/0x330
[168277.822453]  ? __pfx_worker_thread+0x10/0x10
[168277.822455]  kthread+0xcd/0x100
[168277.822458]  ? __pfx_kthread+0x10/0x10
[168277.822460]  ret_from_fork+0x31/0x50
[168277.822463]  ? __pfx_kthread+0x10/0x10
[168277.822465]  ret_from_fork_asm+0x1a/0x30
[168277.822469]  </TASK>
[168277.822471] task:kworker/u129:13 state:D stack:0     pid:389003 tgid:389003 ppid:2      flags:0x00004000
[168277.822475] Workqueue: writeback wb_workfn (flush-253:4)
[168277.822479] Call Trace:
[168277.822480]  <TASK>
[168277.822482]  __schedule+0x425/0x1460
[168277.822485]  ? xfs_trans_add_item+0x37/0xb0 [xfs]
[168277.822566]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.822570]  schedule+0x27/0xf0
[168277.822573]  schedule_timeout+0x15d/0x170
[168277.822576]  __down_common+0x119/0x220
[168277.822579]  ? xfs_buf_lock+0x31/0xe0 [xfs]
[168277.822675]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.822678]  down+0x47/0x60
[168277.822680]  xfs_buf_lock+0x31/0xe0 [xfs]
[168277.822752]  xfs_buf_find_lock+0x55/0x100 [xfs]
[168277.822822]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
[168277.822891]  ? down+0x1e/0x60
[168277.822895]  xfs_buf_read_map+0x62/0x2a0 [xfs]
[168277.822967]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[168277.823058]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
[168277.823145]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[168277.823226]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[168277.823297]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
[168277.823368]  xfs_btree_lookup+0xea/0x500 [xfs]
[168277.823438]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.823442]  xfs_alloc_ag_vextent_locality+0xcc/0x450 [xfs]
[168277.823525]  xfs_alloc_ag_vextent_near+0x2cb/0x540 [xfs]
[168277.823598]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
[168277.823672]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
[168277.823760]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
[168277.823834]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
[168277.823915]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
[168277.823985]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
[168277.824059]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
[168277.824128]  xfs_map_blocks+0x257/0x420 [xfs]
[168277.824219]  iomap_writepages+0x271/0x9b0
[168277.824225]  xfs_vm_writepages+0x67/0x90 [xfs]
[168277.824298]  do_writepages+0x76/0x260
[168277.824304]  __writeback_single_inode+0x3d/0x350
[168277.824308]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824309]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824312]  writeback_sb_inodes+0x21c/0x4e0
[168277.824323]  __writeback_inodes_wb+0x4c/0xf0
[168277.824324]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824327]  wb_writeback+0x193/0x310
[168277.824330]  wb_workfn+0x357/0x450
[168277.824335]  process_one_work+0x18f/0x3b0
[168277.824338]  worker_thread+0x21f/0x330
[168277.824340]  ? __pfx_worker_thread+0x10/0x10
[168277.824342]  kthread+0xcd/0x100
[168277.824345]  ? __pfx_kthread+0x10/0x10
[168277.824347]  ret_from_fork+0x31/0x50
[168277.824350]  ? __pfx_kthread+0x10/0x10
[168277.824352]  ret_from_fork_asm+0x1a/0x30
[168277.824357]  </TASK>
[168277.824359] task:kworker/u129:8  state:D stack:0     pid:401440 tgid:401440 ppid:2      flags:0x00004000
[168277.824363] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.824366] Call Trace:
[168277.824367]  <TASK>
[168277.824369]  __schedule+0x425/0x1460
[168277.824375]  schedule+0x27/0xf0
[168277.824377]  md_bitmap_startwrite+0x14f/0x1c0
[168277.824380]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.824383]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.824390]  raid5_make_request+0x364/0x1290 [raid456]
[168277.824396]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824398]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.824401]  ? __pfx_woken_wake_function+0x10/0x10
[168277.824403]  ? bio_split_rw+0x141/0x2a0
[168277.824408]  md_handle_request+0x153/0x270
[168277.824411]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824413]  __submit_bio+0x190/0x240
[168277.824417]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.824420]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824422]  ? submit_bio_noacct+0x47/0x5b0
[168277.824425]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.824429]  process_one_work+0x18f/0x3b0
[168277.824432]  worker_thread+0x21f/0x330
[168277.824434]  ? __pfx_worker_thread+0x10/0x10
[168277.824436]  kthread+0xcd/0x100
[168277.824438]  ? __pfx_kthread+0x10/0x10
[168277.824441]  ret_from_fork+0x31/0x50
[168277.824443]  ? __pfx_kthread+0x10/0x10
[168277.824446]  ret_from_fork_asm+0x1a/0x30
[168277.824450]  </TASK>
[168277.824451] task:kworker/u129:15 state:D stack:0     pid:401443 tgid:401443 ppid:2      flags:0x00004000
[168277.824454] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.824457] Call Trace:
[168277.824458]  <TASK>
[168277.824460]  __schedule+0x425/0x1460
[168277.824462]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824465]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824467]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
[168277.824473]  schedule+0x27/0xf0
[168277.824476]  md_bitmap_startwrite+0x14f/0x1c0
[168277.824478]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.824481]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.824486]  raid5_make_request+0x364/0x1290 [raid456]
[168277.824492]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824494]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.824497]  ? __pfx_woken_wake_function+0x10/0x10
[168277.824499]  ? bio_split_rw+0x141/0x2a0
[168277.824504]  md_handle_request+0x153/0x270
[168277.824506]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824509]  __submit_bio+0x190/0x240
[168277.824513]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.824516]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824517]  ? submit_bio_noacct+0x47/0x5b0
[168277.824521]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.824524]  process_one_work+0x18f/0x3b0
[168277.824526]  worker_thread+0x21f/0x330
[168277.824529]  ? __pfx_worker_thread+0x10/0x10
[168277.824531]  kthread+0xcd/0x100
[168277.824533]  ? __pfx_kthread+0x10/0x10
[168277.824536]  ret_from_fork+0x31/0x50
[168277.824538]  ? __pfx_kthread+0x10/0x10
[168277.824540]  ret_from_fork_asm+0x1a/0x30
[168277.824544]  </TASK>
[168277.824546] task:kworker/u129:19 state:D stack:0     pid:401446 tgid:401446 ppid:2      flags:0x00004000
[168277.824549] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.824552] Call Trace:
[168277.824553]  <TASK>
[168277.824555]  __schedule+0x425/0x1460
[168277.824560]  schedule+0x27/0xf0
[168277.824562]  md_bitmap_startwrite+0x14f/0x1c0
[168277.824565]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.824568]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.824573]  raid5_make_request+0x364/0x1290 [raid456]
[168277.824579]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824581]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.824583]  ? __pfx_woken_wake_function+0x10/0x10
[168277.824586]  ? bio_split_rw+0x141/0x2a0
[168277.824590]  md_handle_request+0x153/0x270
[168277.824593]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824595]  __submit_bio+0x190/0x240
[168277.824599]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.824602]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824604]  ? submit_bio_noacct+0x47/0x5b0
[168277.824607]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.824610]  process_one_work+0x18f/0x3b0
[168277.824617]  worker_thread+0x21f/0x330
[168277.824620]  ? __pfx_worker_thread+0x10/0x10
[168277.824622]  kthread+0xcd/0x100
[168277.824625]  ? __pfx_kthread+0x10/0x10
[168277.824628]  ret_from_fork+0x31/0x50
[168277.824630]  ? __pfx_kthread+0x10/0x10
[168277.824632]  ret_from_fork_asm+0x1a/0x30
[168277.824636]  </TASK>
[168277.824638] task:kworker/u129:27 state:D stack:0     pid:401450 tgid:401450 ppid:2      flags:0x00004000
[168277.824641] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.824644] Call Trace:
[168277.824645]  <TASK>
[168277.824646]  __schedule+0x425/0x1460
[168277.824652]  schedule+0x27/0xf0
[168277.824655]  md_bitmap_startwrite+0x14f/0x1c0
[168277.824657]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.824660]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.824666]  raid5_make_request+0x364/0x1290 [raid456]
[168277.824672]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824673]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.824677]  ? __pfx_woken_wake_function+0x10/0x10
[168277.824679]  ? bio_split_rw+0x141/0x2a0
[168277.824684]  md_handle_request+0x153/0x270
[168277.824686]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824689]  __submit_bio+0x190/0x240
[168277.824693]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.824696]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824698]  ? submit_bio_noacct+0x47/0x5b0
[168277.824701]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.824704]  process_one_work+0x18f/0x3b0
[168277.824708]  worker_thread+0x21f/0x330
[168277.824710]  ? __pfx_worker_thread+0x10/0x10
[168277.824712]  kthread+0xcd/0x100
[168277.824714]  ? __pfx_kthread+0x10/0x10
[168277.824717]  ret_from_fork+0x31/0x50
[168277.824719]  ? __pfx_kthread+0x10/0x10
[168277.824722]  ret_from_fork_asm+0x1a/0x30
[168277.824726]  </TASK>
[168277.824727] task:kworker/u129:28 state:D stack:0     pid:401451 tgid:401451 ppid:2      flags:0x00004000
[168277.824731] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.824733] Call Trace:
[168277.824735]  <TASK>
[168277.824736]  __schedule+0x425/0x1460
[168277.824742]  schedule+0x27/0xf0
[168277.824745]  md_bitmap_startwrite+0x14f/0x1c0
[168277.824747]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.824750]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.824755]  raid5_make_request+0x364/0x1290 [raid456]
[168277.824762]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824763]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.824766]  ? __pfx_woken_wake_function+0x10/0x10
[168277.824768]  ? bio_split_rw+0x141/0x2a0
[168277.824774]  md_handle_request+0x153/0x270
[168277.824776]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824779]  __submit_bio+0x190/0x240
[168277.824783]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.824786]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824788]  ? submit_bio_noacct+0x47/0x5b0
[168277.824791]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.824794]  process_one_work+0x18f/0x3b0
[168277.824797]  worker_thread+0x21f/0x330
[168277.824800]  ? __pfx_worker_thread+0x10/0x10
[168277.824802]  kthread+0xcd/0x100
[168277.824804]  ? __pfx_kthread+0x10/0x10
[168277.824807]  ret_from_fork+0x31/0x50
[168277.824809]  ? __pfx_kthread+0x10/0x10
[168277.824811]  ret_from_fork_asm+0x1a/0x30
[168277.824815]  </TASK>
[168277.824819] task:kworker/u129:4  state:D stack:0     pid:424186 tgid:424186 ppid:2      flags:0x00004000
[168277.824822] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.824824] Call Trace:
[168277.824826]  <TASK>
[168277.824827]  __schedule+0x425/0x1460
[168277.824833]  schedule+0x27/0xf0
[168277.824836]  md_bitmap_startwrite+0x14f/0x1c0
[168277.824838]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.824841]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.824847]  raid5_make_request+0x364/0x1290 [raid456]
[168277.824853]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824854]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.824857]  ? __pfx_woken_wake_function+0x10/0x10
[168277.824859]  ? bio_split_rw+0x141/0x2a0
[168277.824864]  md_handle_request+0x153/0x270
[168277.824867]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824869]  __submit_bio+0x190/0x240
[168277.824873]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.824877]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.824878]  ? submit_bio_noacct+0x47/0x5b0
[168277.824882]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.824885]  process_one_work+0x18f/0x3b0
[168277.824887]  worker_thread+0x21f/0x330
[168277.824890]  ? __pfx_worker_thread+0x10/0x10
[168277.824892]  kthread+0xcd/0x100
[168277.824895]  ? __pfx_kthread+0x10/0x10
[168277.824897]  ret_from_fork+0x31/0x50
[168277.824899]  ? __pfx_kthread+0x10/0x10
[168277.824902]  ret_from_fork_asm+0x1a/0x30
[168277.824907]  </TASK>
[168277.824908] task:kworker/u129:16 state:D stack:0     pid:424195 tgid:424195 ppid:2      flags:0x00004000
[168277.824911] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
[168277.825002] Call Trace:
[168277.825003]  <TASK>
[168277.825005]  __schedule+0x425/0x1460
[168277.825008]  ? __blk_flush_plug+0xf5/0x150
[168277.825013]  schedule+0x27/0xf0
[168277.825016]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
[168277.825090]  ? __pfx_default_wake_function+0x10/0x10
[168277.825094]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
[168277.825165]  xlog_write+0x310/0x470 [xfs]
[168277.825237]  xlog_cil_push_work+0x6a5/0x880 [xfs]
[168277.825311]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825314]  process_one_work+0x18f/0x3b0
[168277.825317]  worker_thread+0x21f/0x330
[168277.825319]  ? __pfx_worker_thread+0x10/0x10
[168277.825321]  kthread+0xcd/0x100
[168277.825324]  ? __pfx_kthread+0x10/0x10
[168277.825326]  ret_from_fork+0x31/0x50
[168277.825329]  ? __pfx_kthread+0x10/0x10
[168277.825331]  ret_from_fork_asm+0x1a/0x30
[168277.825336]  </TASK>
[168277.825337] task:kworker/u129:29 state:D stack:0     pid:424199 tgid:424199 ppid:2      flags:0x00004000
[168277.825340] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825343] Call Trace:
[168277.825344]  <TASK>
[168277.825346]  __schedule+0x425/0x1460
[168277.825352]  schedule+0x27/0xf0
[168277.825355]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825357]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825361]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.825367]  raid5_make_request+0x364/0x1290 [raid456]
[168277.825373]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825375]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.825378]  ? __pfx_woken_wake_function+0x10/0x10
[168277.825380]  ? bio_split_rw+0x141/0x2a0
[168277.825385]  md_handle_request+0x153/0x270
[168277.825389]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825391]  __submit_bio+0x190/0x240
[168277.825395]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.825398]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825400]  ? submit_bio_noacct+0x47/0x5b0
[168277.825403]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.825406]  process_one_work+0x18f/0x3b0
[168277.825409]  worker_thread+0x21f/0x330
[168277.825411]  ? __pfx_worker_thread+0x10/0x10
[168277.825413]  kthread+0xcd/0x100
[168277.825416]  ? __pfx_kthread+0x10/0x10
[168277.825419]  ret_from_fork+0x31/0x50
[168277.825421]  ? __pfx_kthread+0x10/0x10
[168277.825423]  ret_from_fork_asm+0x1a/0x30
[168277.825428]  </TASK>
[168277.825429] task:kworker/u129:0  state:D stack:0     pid:425890 tgid:425890 ppid:2      flags:0x00004000
[168277.825432] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825435] Call Trace:
[168277.825436]  <TASK>
[168277.825438]  __schedule+0x425/0x1460
[168277.825440]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825445]  schedule+0x27/0xf0
[168277.825448]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825450]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825453]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.825459]  raid5_make_request+0x364/0x1290 [raid456]
[168277.825464]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825466]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.825469]  ? __pfx_woken_wake_function+0x10/0x10
[168277.825472]  ? bio_split_rw+0x141/0x2a0
[168277.825476]  md_handle_request+0x153/0x270
[168277.825479]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825481]  __submit_bio+0x190/0x240
[168277.825486]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.825489]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825490]  ? submit_bio_noacct+0x47/0x5b0
[168277.825494]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.825497]  process_one_work+0x18f/0x3b0
[168277.825500]  worker_thread+0x21f/0x330
[168277.825502]  ? __pfx_worker_thread+0x10/0x10
[168277.825504]  kthread+0xcd/0x100
[168277.825506]  ? __pfx_kthread+0x10/0x10
[168277.825509]  ret_from_fork+0x31/0x50
[168277.825512]  ? __pfx_kthread+0x10/0x10
[168277.825514]  ret_from_fork_asm+0x1a/0x30
[168277.825518]  </TASK>
[168277.825520] task:kworker/u129:1  state:D stack:0     pid:425891 tgid:425891 ppid:2      flags:0x00004000
[168277.825523] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825526] Call Trace:
[168277.825527]  <TASK>
[168277.825528]  __schedule+0x425/0x1460
[168277.825531]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825536]  schedule+0x27/0xf0
[168277.825538]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825541]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825544]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.825549]  raid5_make_request+0x364/0x1290 [raid456]
[168277.825555]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825557]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.825560]  ? __pfx_woken_wake_function+0x10/0x10
[168277.825563]  ? bio_split_rw+0x141/0x2a0
[168277.825568]  md_handle_request+0x153/0x270
[168277.825570]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825573]  __submit_bio+0x190/0x240
[168277.825577]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.825580]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825582]  ? submit_bio_noacct+0x47/0x5b0
[168277.825585]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.825588]  process_one_work+0x18f/0x3b0
[168277.825591]  worker_thread+0x21f/0x330
[168277.825593]  ? __pfx_worker_thread+0x10/0x10
[168277.825595]  kthread+0xcd/0x100
[168277.825598]  ? __pfx_kthread+0x10/0x10
[168277.825600]  ret_from_fork+0x31/0x50
[168277.825603]  ? __pfx_kthread+0x10/0x10
[168277.825605]  ret_from_fork_asm+0x1a/0x30
[168277.825610]  </TASK>
[168277.825611] task:kworker/u129:2  state:D stack:0     pid:425892 tgid:425892 ppid:2      flags:0x00004000
[168277.825618] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825621] Call Trace:
[168277.825622]  <TASK>
[168277.825623]  __schedule+0x425/0x1460
[168277.825629]  schedule+0x27/0xf0
[168277.825632]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825634]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825637]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.825643]  raid5_make_request+0x364/0x1290 [raid456]
[168277.825649]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825650]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.825653]  ? __pfx_woken_wake_function+0x10/0x10
[168277.825656]  ? bio_split_rw+0x141/0x2a0
[168277.825661]  md_handle_request+0x153/0x270
[168277.825664]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825666]  __submit_bio+0x190/0x240
[168277.825670]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.825673]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825675]  ? submit_bio_noacct+0x47/0x5b0
[168277.825678]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.825681]  process_one_work+0x18f/0x3b0
[168277.825684]  worker_thread+0x21f/0x330
[168277.825687]  ? __pfx_worker_thread+0x10/0x10
[168277.825689]  kthread+0xcd/0x100
[168277.825691]  ? __pfx_kthread+0x10/0x10
[168277.825694]  ret_from_fork+0x31/0x50
[168277.825696]  ? __pfx_kthread+0x10/0x10
[168277.825698]  ret_from_fork_asm+0x1a/0x30
[168277.825703]  </TASK>
[168277.825704] task:kworker/u129:3  state:D stack:0     pid:425893 tgid:425893 ppid:2      flags:0x00004000
[168277.825707] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825710] Call Trace:
[168277.825711]  <TASK>
[168277.825713]  __schedule+0x425/0x1460
[168277.825715]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825720]  schedule+0x27/0xf0
[168277.825722]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825725]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825728]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.825733]  raid5_make_request+0x364/0x1290 [raid456]
[168277.825739]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825741]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.825744]  ? __pfx_woken_wake_function+0x10/0x10
[168277.825746]  ? bio_split_rw+0x141/0x2a0
[168277.825751]  md_handle_request+0x153/0x270
[168277.825754]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825756]  __submit_bio+0x190/0x240
[168277.825760]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.825763]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825765]  ? submit_bio_noacct+0x47/0x5b0
[168277.825768]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.825771]  process_one_work+0x18f/0x3b0
[168277.825774]  worker_thread+0x21f/0x330
[168277.825776]  ? __pfx_worker_thread+0x10/0x10
[168277.825779]  kthread+0xcd/0x100
[168277.825781]  ? __pfx_kthread+0x10/0x10
[168277.825784]  ret_from_fork+0x31/0x50
[168277.825786]  ? __pfx_kthread+0x10/0x10
[168277.825788]  ret_from_fork_asm+0x1a/0x30
[168277.825792]  </TASK>
[168277.825794] task:kworker/u129:5  state:D stack:0     pid:425894 tgid:425894 ppid:2      flags:0x00004000
[168277.825797] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825799] Call Trace:
[168277.825800]  <TASK>
[168277.825802]  __schedule+0x425/0x1460
[168277.825805]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825809]  schedule+0x27/0xf0
[168277.825812]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825814]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825817]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.825823]  raid5_make_request+0x364/0x1290 [raid456]
[168277.825829]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825830]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.825833]  ? __pfx_woken_wake_function+0x10/0x10
[168277.825836]  ? bio_split_rw+0x141/0x2a0
[168277.825840]  md_handle_request+0x153/0x270
[168277.825843]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825846]  __submit_bio+0x190/0x240
[168277.825850]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.825853]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825855]  ? submit_bio_noacct+0x47/0x5b0
[168277.825858]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.825861]  process_one_work+0x18f/0x3b0
[168277.825864]  worker_thread+0x21f/0x330
[168277.825866]  ? __pfx_worker_thread+0x10/0x10
[168277.825868]  kthread+0xcd/0x100
[168277.825871]  ? __pfx_kthread+0x10/0x10
[168277.825873]  ret_from_fork+0x31/0x50
[168277.825876]  ? __pfx_kthread+0x10/0x10
[168277.825878]  ret_from_fork_asm+0x1a/0x30
[168277.825882]  </TASK>
[168277.825884] task:kworker/u129:6  state:D stack:0     pid:425895 tgid:425895 ppid:2      flags:0x00004000
[168277.825886] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825889] Call Trace:
[168277.825890]  <TASK>
[168277.825892]  __schedule+0x425/0x1460
[168277.825897]  schedule+0x27/0xf0
[168277.825900]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825903]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825905]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.825911]  raid5_make_request+0x364/0x1290 [raid456]
[168277.825917]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825919]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.825922]  ? __pfx_woken_wake_function+0x10/0x10
[168277.825924]  ? bio_split_rw+0x141/0x2a0
[168277.825929]  md_handle_request+0x153/0x270
[168277.825932]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825935]  __submit_bio+0x190/0x240
[168277.825939]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.825942]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.825944]  ? submit_bio_noacct+0x47/0x5b0
[168277.825947]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.825950]  process_one_work+0x18f/0x3b0
[168277.825953]  worker_thread+0x21f/0x330
[168277.825956]  ? __pfx_worker_thread+0x10/0x10
[168277.825958]  kthread+0xcd/0x100
[168277.825960]  ? __pfx_kthread+0x10/0x10
[168277.825962]  ret_from_fork+0x31/0x50
[168277.825965]  ? __pfx_kthread+0x10/0x10
[168277.825968]  ret_from_fork_asm+0x1a/0x30
[168277.825972]  </TASK>
[168277.825973] task:kworker/u129:7  state:D stack:0     pid:425896 tgid:425896 ppid:2      flags:0x00004000
[168277.825976] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.825979] Call Trace:
[168277.825980]  <TASK>
[168277.825982]  __schedule+0x425/0x1460
[168277.825987]  schedule+0x27/0xf0
[168277.825990]  md_bitmap_startwrite+0x14f/0x1c0
[168277.825992]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.825995]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826001]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826007]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826009]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826011]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826014]  ? bio_split_rw+0x141/0x2a0
[168277.826018]  md_handle_request+0x153/0x270
[168277.826021]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826024]  __submit_bio+0x190/0x240
[168277.826028]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826031]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826032]  ? submit_bio_noacct+0x47/0x5b0
[168277.826036]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826039]  process_one_work+0x18f/0x3b0
[168277.826042]  worker_thread+0x21f/0x330
[168277.826044]  ? __pfx_worker_thread+0x10/0x10
[168277.826046]  kthread+0xcd/0x100
[168277.826049]  ? __pfx_kthread+0x10/0x10
[168277.826051]  ret_from_fork+0x31/0x50
[168277.826054]  ? __pfx_kthread+0x10/0x10
[168277.826056]  ret_from_fork_asm+0x1a/0x30
[168277.826060]  </TASK>
[168277.826062] task:kworker/u129:9  state:D stack:0     pid:425897 tgid:425897 ppid:2      flags:0x00004000
[168277.826065] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826067] Call Trace:
[168277.826068]  <TASK>
[168277.826070]  __schedule+0x425/0x1460
[168277.826076]  schedule+0x27/0xf0
[168277.826078]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826081]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826084]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826090]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826096]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826098]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826100]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826103]  ? bio_split_rw+0x141/0x2a0
[168277.826107]  md_handle_request+0x153/0x270
[168277.826110]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826113]  __submit_bio+0x190/0x240
[168277.826117]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826120]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826122]  ? submit_bio_noacct+0x47/0x5b0
[168277.826125]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826128]  process_one_work+0x18f/0x3b0
[168277.826131]  worker_thread+0x21f/0x330
[168277.826133]  ? __pfx_worker_thread+0x10/0x10
[168277.826136]  kthread+0xcd/0x100
[168277.826138]  ? __pfx_kthread+0x10/0x10
[168277.826140]  ret_from_fork+0x31/0x50
[168277.826143]  ? __pfx_kthread+0x10/0x10
[168277.826145]  ret_from_fork_asm+0x1a/0x30
[168277.826150]  </TASK>
[168277.826151] task:kworker/u129:10 state:D stack:0     pid:425898 tgid:425898 ppid:2      flags:0x00004000
[168277.826154] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826156] Call Trace:
[168277.826157]  <TASK>
[168277.826159]  __schedule+0x425/0x1460
[168277.826165]  schedule+0x27/0xf0
[168277.826167]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826170]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826172]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826178]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826184]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826186]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826189]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826191]  ? bio_split_rw+0x141/0x2a0
[168277.826196]  md_handle_request+0x153/0x270
[168277.826199]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826201]  __submit_bio+0x190/0x240
[168277.826205]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826208]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826210]  ? submit_bio_noacct+0x47/0x5b0
[168277.826214]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826217]  process_one_work+0x18f/0x3b0
[168277.826219]  worker_thread+0x21f/0x330
[168277.826222]  ? __pfx_worker_thread+0x10/0x10
[168277.826224]  kthread+0xcd/0x100
[168277.826227]  ? __pfx_kthread+0x10/0x10
[168277.826229]  ret_from_fork+0x31/0x50
[168277.826231]  ? __pfx_kthread+0x10/0x10
[168277.826234]  ret_from_fork_asm+0x1a/0x30
[168277.826238]  </TASK>
[168277.826240] task:kworker/u129:11 state:D stack:0     pid:425899 tgid:425899 ppid:2      flags:0x00004000
[168277.826242] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826245] Call Trace:
[168277.826246]  <TASK>
[168277.826248]  __schedule+0x425/0x1460
[168277.826250]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826255]  schedule+0x27/0xf0
[168277.826257]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826260]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826263]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826269]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826274]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826277]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826279]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826282]  ? bio_split_rw+0x141/0x2a0
[168277.826286]  md_handle_request+0x153/0x270
[168277.826289]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826292]  __submit_bio+0x190/0x240
[168277.826296]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826298]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826301]  ? submit_bio_noacct+0x47/0x5b0
[168277.826304]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826307]  process_one_work+0x18f/0x3b0
[168277.826309]  worker_thread+0x21f/0x330
[168277.826312]  ? __pfx_worker_thread+0x10/0x10
[168277.826314]  kthread+0xcd/0x100
[168277.826316]  ? __pfx_kthread+0x10/0x10
[168277.826319]  ret_from_fork+0x31/0x50
[168277.826321]  ? __pfx_kthread+0x10/0x10
[168277.826324]  ret_from_fork_asm+0x1a/0x30
[168277.826328]  </TASK>
[168277.826330] task:kworker/u129:12 state:D stack:0     pid:425900 tgid:425900 ppid:2      flags:0x00004000
[168277.826332] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826335] Call Trace:
[168277.826336]  <TASK>
[168277.826337]  __schedule+0x425/0x1460
[168277.826343]  schedule+0x27/0xf0
[168277.826346]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826349]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826351]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826357]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826363]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826365]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826367]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826370]  ? bio_split_rw+0x141/0x2a0
[168277.826375]  md_handle_request+0x153/0x270
[168277.826377]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826380]  __submit_bio+0x190/0x240
[168277.826384]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826387]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826389]  ? submit_bio_noacct+0x47/0x5b0
[168277.826392]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826395]  process_one_work+0x18f/0x3b0
[168277.826398]  worker_thread+0x21f/0x330
[168277.826400]  ? __pfx_worker_thread+0x10/0x10
[168277.826402]  kthread+0xcd/0x100
[168277.826405]  ? __pfx_kthread+0x10/0x10
[168277.826408]  ret_from_fork+0x31/0x50
[168277.826410]  ? __pfx_kthread+0x10/0x10
[168277.826412]  ret_from_fork_asm+0x1a/0x30
[168277.826417]  </TASK>
[168277.826418] task:kworker/u129:14 state:D stack:0     pid:425901 tgid:425901 ppid:2      flags:0x00004000
[168277.826421] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826424] Call Trace:
[168277.826425]  <TASK>
[168277.826426]  __schedule+0x425/0x1460
[168277.826432]  schedule+0x27/0xf0
[168277.826435]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826438]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826441]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826447]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826453]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826455]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826458]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826460]  ? bio_split_rw+0x141/0x2a0
[168277.826465]  md_handle_request+0x153/0x270
[168277.826468]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826471]  __submit_bio+0x190/0x240
[168277.826475]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826478]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826480]  ? submit_bio_noacct+0x47/0x5b0
[168277.826483]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826486]  process_one_work+0x18f/0x3b0
[168277.826489]  worker_thread+0x21f/0x330
[168277.826491]  ? __pfx_worker_thread+0x10/0x10
[168277.826493]  ? __pfx_worker_thread+0x10/0x10
[168277.826495]  kthread+0xcd/0x100
[168277.826498]  ? __pfx_kthread+0x10/0x10
[168277.826501]  ret_from_fork+0x31/0x50
[168277.826503]  ? __pfx_kthread+0x10/0x10
[168277.826505]  ret_from_fork_asm+0x1a/0x30
[168277.826509]  </TASK>
[168277.826511] task:kworker/u129:18 state:D stack:0     pid:425924 tgid:425924 ppid:2      flags:0x00004000
[168277.826514] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826516] Call Trace:
[168277.826517]  <TASK>
[168277.826519]  __schedule+0x425/0x1460
[168277.826524]  schedule+0x27/0xf0
[168277.826527]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826530]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826533]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826538]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826544]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826546]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826549]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826551]  ? bio_split_rw+0x141/0x2a0
[168277.826556]  md_handle_request+0x153/0x270
[168277.826559]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826561]  __submit_bio+0x190/0x240
[168277.826565]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826569]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826570]  ? submit_bio_noacct+0x47/0x5b0
[168277.826573]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826577]  process_one_work+0x18f/0x3b0
[168277.826580]  worker_thread+0x21f/0x330
[168277.826582]  ? __pfx_worker_thread+0x10/0x10
[168277.826584]  kthread+0xcd/0x100
[168277.826586]  ? __pfx_kthread+0x10/0x10
[168277.826589]  ret_from_fork+0x31/0x50
[168277.826591]  ? __pfx_kthread+0x10/0x10
[168277.826594]  ret_from_fork_asm+0x1a/0x30
[168277.826598]  </TASK>
[168277.826600] task:kworker/u129:20 state:D stack:0     pid:425925 tgid:425925 ppid:2      flags:0x00004000
[168277.826602] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826605] Call Trace:
[168277.826606]  <TASK>
[168277.826608]  __schedule+0x425/0x1460
[168277.826617]  schedule+0x27/0xf0
[168277.826619]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826622]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826625]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826631]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826637]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826639]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826642]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826644]  ? bio_split_rw+0x141/0x2a0
[168277.826649]  md_handle_request+0x153/0x270
[168277.826652]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826655]  __submit_bio+0x190/0x240
[168277.826659]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826662]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826663]  ? submit_bio_noacct+0x47/0x5b0
[168277.826667]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826670]  process_one_work+0x18f/0x3b0
[168277.826673]  worker_thread+0x21f/0x330
[168277.826675]  ? __pfx_worker_thread+0x10/0x10
[168277.826677]  kthread+0xcd/0x100
[168277.826679]  ? __pfx_kthread+0x10/0x10
[168277.826682]  ret_from_fork+0x31/0x50
[168277.826684]  ? __pfx_kthread+0x10/0x10
[168277.826687]  ret_from_fork_asm+0x1a/0x30
[168277.826691]  </TASK>
[168277.826693] task:kworker/u129:21 state:D stack:0     pid:425926 tgid:425926 ppid:2      flags:0x00004000
[168277.826696] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826699] Call Trace:
[168277.826700]  <TASK>
[168277.826701]  __schedule+0x425/0x1460
[168277.826707]  schedule+0x27/0xf0
[168277.826710]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826712]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826715]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826720]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826727]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826728]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826731]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826734]  ? bio_split_rw+0x141/0x2a0
[168277.826738]  md_handle_request+0x153/0x270
[168277.826741]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826743]  __submit_bio+0x190/0x240
[168277.826748]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826751]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826753]  ? submit_bio_noacct+0x47/0x5b0
[168277.826756]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826759]  process_one_work+0x18f/0x3b0
[168277.826762]  worker_thread+0x21f/0x330
[168277.826764]  ? __pfx_worker_thread+0x10/0x10
[168277.826766]  kthread+0xcd/0x100
[168277.826768]  ? __pfx_kthread+0x10/0x10
[168277.826771]  ret_from_fork+0x31/0x50
[168277.826774]  ? __pfx_kthread+0x10/0x10
[168277.826776]  ret_from_fork_asm+0x1a/0x30
[168277.826780]  </TASK>
[168277.826782] task:kworker/u129:23 state:D stack:0     pid:425927 tgid:425927 ppid:2      flags:0x00004000
[168277.826784] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826787] Call Trace:
[168277.826788]  <TASK>
[168277.826790]  __schedule+0x425/0x1460
[168277.826795]  schedule+0x27/0xf0
[168277.826798]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826800]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826803]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826809]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826815]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826817]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826819]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826822]  ? bio_split_rw+0x141/0x2a0
[168277.826827]  md_handle_request+0x153/0x270
[168277.826830]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826832]  __submit_bio+0x190/0x240
[168277.826837]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826840]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826841]  ? submit_bio_noacct+0x47/0x5b0
[168277.826844]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826848]  process_one_work+0x18f/0x3b0
[168277.826851]  worker_thread+0x21f/0x330
[168277.826853]  ? __pfx_worker_thread+0x10/0x10
[168277.826855]  kthread+0xcd/0x100
[168277.826857]  ? __pfx_kthread+0x10/0x10
[168277.826860]  ret_from_fork+0x31/0x50
[168277.826862]  ? __pfx_kthread+0x10/0x10
[168277.826865]  ret_from_fork_asm+0x1a/0x30
[168277.826869]  </TASK>
[168277.826871] task:kworker/u129:25 state:D stack:0     pid:425929 tgid:425929 ppid:2      flags:0x00004000
[168277.826874] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826877] Call Trace:
[168277.826878]  <TASK>
[168277.826880]  __schedule+0x425/0x1460
[168277.826885]  schedule+0x27/0xf0
[168277.826887]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826890]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826893]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826898]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826904]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826906]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826909]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826911]  ? bio_split_rw+0x141/0x2a0
[168277.826916]  md_handle_request+0x153/0x270
[168277.826919]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826921]  __submit_bio+0x190/0x240
[168277.826925]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.826928]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826930]  ? submit_bio_noacct+0x47/0x5b0
[168277.826933]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.826936]  process_one_work+0x18f/0x3b0
[168277.826939]  worker_thread+0x21f/0x330
[168277.826942]  ? __pfx_worker_thread+0x10/0x10
[168277.826944]  kthread+0xcd/0x100
[168277.826946]  ? __pfx_kthread+0x10/0x10
[168277.826948]  ret_from_fork+0x31/0x50
[168277.826951]  ? __pfx_kthread+0x10/0x10
[168277.826954]  ret_from_fork_asm+0x1a/0x30
[168277.826958]  </TASK>
[168277.826959] task:kworker/u129:26 state:D stack:0     pid:425930 tgid:425930 ppid:2      flags:0x00004000
[168277.826962] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.826965] Call Trace:
[168277.826966]  <TASK>
[168277.826967]  __schedule+0x425/0x1460
[168277.826973]  schedule+0x27/0xf0
[168277.826975]  md_bitmap_startwrite+0x14f/0x1c0
[168277.826978]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.826981]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.826986]  raid5_make_request+0x364/0x1290 [raid456]
[168277.826992]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.826994]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.826997]  ? __pfx_woken_wake_function+0x10/0x10
[168277.826999]  ? bio_split_rw+0x141/0x2a0
[168277.827004]  md_handle_request+0x153/0x270
[168277.827007]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827010]  __submit_bio+0x190/0x240
[168277.827014]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.827016]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827018]  ? submit_bio_noacct+0x47/0x5b0
[168277.827021]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.827025]  process_one_work+0x18f/0x3b0
[168277.827028]  worker_thread+0x21f/0x330
[168277.827030]  ? __pfx_worker_thread+0x10/0x10
[168277.827032]  kthread+0xcd/0x100
[168277.827034]  ? __pfx_kthread+0x10/0x10
[168277.827037]  ret_from_fork+0x31/0x50
[168277.827039]  ? __pfx_kthread+0x10/0x10
[168277.827042]  ret_from_fork_asm+0x1a/0x30
[168277.827046]  </TASK>
[168277.827047] task:kworker/u129:30 state:D stack:0     pid:425931 tgid:425931 ppid:2      flags:0x00004000
[168277.827050] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.827053] Call Trace:
[168277.827054]  <TASK>
[168277.827055]  __schedule+0x425/0x1460
[168277.827061]  schedule+0x27/0xf0
[168277.827064]  md_bitmap_startwrite+0x14f/0x1c0
[168277.827066]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.827069]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.827075]  raid5_make_request+0x364/0x1290 [raid456]
[168277.827081]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827083]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.827086]  ? __pfx_woken_wake_function+0x10/0x10
[168277.827088]  ? bio_split_rw+0x141/0x2a0
[168277.827093]  md_handle_request+0x153/0x270
[168277.827096]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827098]  __submit_bio+0x190/0x240
[168277.827102]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.827105]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827108]  ? submit_bio_noacct+0x47/0x5b0
[168277.827111]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.827114]  process_one_work+0x18f/0x3b0
[168277.827117]  worker_thread+0x21f/0x330
[168277.827119]  ? __pfx_worker_thread+0x10/0x10
[168277.827121]  kthread+0xcd/0x100
[168277.827124]  ? __pfx_kthread+0x10/0x10
[168277.827126]  ret_from_fork+0x31/0x50
[168277.827128]  ? __pfx_kthread+0x10/0x10
[168277.827131]  ret_from_fork_asm+0x1a/0x30
[168277.827136]  </TASK>
[168277.827137] task:kworker/u129:31 state:D stack:0     pid:425932 tgid:425932 ppid:2      flags:0x00004000
[168277.827139] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.827142] Call Trace:
[168277.827143]  <TASK>
[168277.827145]  __schedule+0x425/0x1460
[168277.827151]  schedule+0x27/0xf0
[168277.827153]  md_bitmap_startwrite+0x14f/0x1c0
[168277.827156]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.827159]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.827165]  raid5_make_request+0x364/0x1290 [raid456]
[168277.827171]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827173]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.827175]  ? __pfx_woken_wake_function+0x10/0x10
[168277.827178]  ? bio_split_rw+0x141/0x2a0
[168277.827182]  md_handle_request+0x153/0x270
[168277.827185]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827188]  __submit_bio+0x190/0x240
[168277.827192]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.827195]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827197]  ? submit_bio_noacct+0x47/0x5b0
[168277.827200]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.827203]  process_one_work+0x18f/0x3b0
[168277.827206]  worker_thread+0x21f/0x330
[168277.827209]  ? __pfx_worker_thread+0x10/0x10
[168277.827210]  kthread+0xcd/0x100
[168277.827213]  ? __pfx_kthread+0x10/0x10
[168277.827215]  ret_from_fork+0x31/0x50
[168277.827218]  ? __pfx_kthread+0x10/0x10
[168277.827220]  ret_from_fork_asm+0x1a/0x30
[168277.827225]  </TASK>
[168277.827226] task:kworker/u129:32 state:D stack:0     pid:425933 tgid:425933 ppid:2      flags:0x00004000
[168277.827228] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.827231] Call Trace:
[168277.827232]  <TASK>
[168277.827234]  __schedule+0x425/0x1460
[168277.827240]  schedule+0x27/0xf0
[168277.827242]  md_bitmap_startwrite+0x14f/0x1c0
[168277.827245]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.827247]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.827254]  raid5_make_request+0x364/0x1290 [raid456]
[168277.827259]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827261]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.827264]  ? __pfx_woken_wake_function+0x10/0x10
[168277.827267]  ? bio_split_rw+0x141/0x2a0
[168277.827271]  md_handle_request+0x153/0x270
[168277.827274]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827277]  __submit_bio+0x190/0x240
[168277.827281]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.827284]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827285]  ? submit_bio_noacct+0x47/0x5b0
[168277.827289]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.827292]  process_one_work+0x18f/0x3b0
[168277.827295]  worker_thread+0x21f/0x330
[168277.827297]  ? __pfx_worker_thread+0x10/0x10
[168277.827299]  kthread+0xcd/0x100
[168277.827302]  ? __pfx_kthread+0x10/0x10
[168277.827304]  ret_from_fork+0x31/0x50
[168277.827306]  ? __pfx_kthread+0x10/0x10
[168277.827309]  ret_from_fork_asm+0x1a/0x30
[168277.827313]  </TASK>
[168277.827315] task:kworker/u129:33 state:D stack:0     pid:427404 tgid:427404 ppid:2      flags:0x00004000
[168277.827318] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.827320] Call Trace:
[168277.827322]  <TASK>
[168277.827324]  __schedule+0x425/0x1460
[168277.827329]  schedule+0x27/0xf0
[168277.827332]  md_bitmap_startwrite+0x14f/0x1c0
[168277.827334]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.827337]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.827343]  raid5_make_request+0x364/0x1290 [raid456]
[168277.827349]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827351]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.827353]  ? __pfx_woken_wake_function+0x10/0x10
[168277.827356]  ? bio_split_rw+0x141/0x2a0
[168277.827361]  md_handle_request+0x153/0x270
[168277.827364]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827366]  __submit_bio+0x190/0x240
[168277.827370]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.827373]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827374]  ? submit_bio_noacct+0x47/0x5b0
[168277.827377]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.827381]  process_one_work+0x18f/0x3b0
[168277.827383]  worker_thread+0x21f/0x330
[168277.827385]  ? __pfx_worker_thread+0x10/0x10
[168277.827388]  ? __pfx_worker_thread+0x10/0x10
[168277.827390]  kthread+0xcd/0x100
[168277.827392]  ? __pfx_kthread+0x10/0x10
[168277.827394]  ret_from_fork+0x31/0x50
[168277.827397]  ? __pfx_kthread+0x10/0x10
[168277.827399]  ret_from_fork_asm+0x1a/0x30
[168277.827404]  </TASK>
[168277.827407] task:rsync           state:D stack:0     pid:428253 tgid:428253 ppid:428251 flags:0x00004000
[168277.827409] Call Trace:
[168277.827410]  <TASK>
[168277.827412]  __schedule+0x425/0x1460
[168277.827415]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827419]  schedule+0x27/0xf0
[168277.827422]  schedule_timeout+0x9e/0x170
[168277.827424]  ? __pfx_process_timeout+0x10/0x10
[168277.827427]  io_schedule_timeout+0x51/0x70
[168277.827431]  balance_dirty_pages+0x3e7/0x9f0
[168277.827439]  balance_dirty_pages_ratelimited_flags+0x2a9/0x390
[168277.827442]  iomap_file_buffered_write+0x163/0x440
[168277.827450]  xfs_file_buffered_write+0x87/0x2a0 [xfs]
[168277.827537]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827540]  vfs_write+0x291/0x460
[168277.827546]  ksys_write+0x6f/0xf0
[168277.827549]  do_syscall_64+0xb7/0x200
[168277.827554]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[168277.827558] RIP: 0033:0x7f7b9e92c133
[168277.827560] RSP: 002b:00007fff07ae39f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[168277.827562] RAX: ffffffffffffffda RBX: 00000000369cf5c0 RCX: 00007f7b9e92c133
[168277.827564] RDX: 0000000000040000 RSI: 00000000369cf5c0 RDI: 0000000000000003
[168277.827565] RBP: 0000000000000003 R08: 0000000000020000 R09: 000000003698f5b0
[168277.827567] R10: 00007f7b9ed1b180 R11: 0000000000000246 R12: 0000000000040000
[168277.827568] R13: 0000000000020000 R14: 0000000000020000 R15: 0000000000000000
[168277.827572]  </TASK>
[168277.827573] task:kworker/u129:34 state:D stack:0     pid:428278 tgid:428278 ppid:2      flags:0x00004000
[168277.827576] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.827579] Call Trace:
[168277.827580]  <TASK>
[168277.827582]  __schedule+0x425/0x1460
[168277.827586]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827588]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
[168277.827594]  schedule+0x27/0xf0
[168277.827597]  md_bitmap_startwrite+0x14f/0x1c0
[168277.827600]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.827603]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.827609]  raid5_make_request+0x364/0x1290 [raid456]
[168277.827619]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827621]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.827623]  ? __pfx_woken_wake_function+0x10/0x10
[168277.827626]  ? bio_split_rw+0x141/0x2a0
[168277.827631]  md_handle_request+0x153/0x270
[168277.827634]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827636]  __submit_bio+0x190/0x240
[168277.827640]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.827644]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827646]  ? submit_bio_noacct+0x47/0x5b0
[168277.827649]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.827652]  process_one_work+0x18f/0x3b0
[168277.827656]  worker_thread+0x21f/0x330
[168277.827658]  ? __pfx_worker_thread+0x10/0x10
[168277.827660]  ? __pfx_worker_thread+0x10/0x10
[168277.827662]  kthread+0xcd/0x100
[168277.827664]  ? __pfx_kthread+0x10/0x10
[168277.827667]  ret_from_fork+0x31/0x50
[168277.827670]  ? __pfx_kthread+0x10/0x10
[168277.827672]  ret_from_fork_asm+0x1a/0x30
[168277.827678]  </TASK>
[168277.827679] task:kworker/u129:35 state:D stack:0     pid:428482 tgid:428482 ppid:2      flags:0x00004000
[168277.827682] Workqueue: kcryptd-253:4-1 kcryptd_crypt [dm_crypt]
[168277.827685] Call Trace:
[168277.827686]  <TASK>
[168277.827687]  __schedule+0x425/0x1460
[168277.827693]  schedule+0x27/0xf0
[168277.827696]  md_bitmap_startwrite+0x14f/0x1c0
[168277.827698]  ? __pfx_autoremove_wake_function+0x10/0x10
[168277.827701]  __add_stripe_bio+0x1f4/0x240 [raid456]
[168277.827707]  raid5_make_request+0x364/0x1290 [raid456]
[168277.827713]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827715]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[168277.827718]  ? __pfx_woken_wake_function+0x10/0x10
[168277.827720]  ? bio_split_rw+0x141/0x2a0
[168277.827725]  md_handle_request+0x153/0x270
[168277.827728]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827731]  __submit_bio+0x190/0x240
[168277.827735]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.827738]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827740]  ? submit_bio_noacct+0x47/0x5b0
[168277.827743]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[168277.827747]  process_one_work+0x18f/0x3b0
[168277.827750]  worker_thread+0x21f/0x330
[168277.827751]  ? __pfx_worker_thread+0x10/0x10
[168277.827753]  ? __pfx_worker_thread+0x10/0x10
[168277.827755]  kthread+0xcd/0x100
[168277.827758]  ? __pfx_kthread+0x10/0x10
[168277.827760]  ret_from_fork+0x31/0x50
[168277.827762]  ? __pfx_kthread+0x10/0x10
[168277.827765]  ret_from_fork_asm+0x1a/0x30
[168277.827769]  </TASK>
[168277.827771] task:kworker/17:3    state:D stack:0     pid:438399 tgid:438399 ppid:2      flags:0x00004000
[168277.827774] Workqueue: xfs-sync/dm-4 xfs_log_worker [xfs]
[168277.827854] Call Trace:
[168277.827855]  <TASK>
[168277.827857]  __schedule+0x425/0x1460
[168277.827861]  ? ttwu_do_activate+0x64/0x210
[168277.827863]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827867]  schedule+0x27/0xf0
[168277.827870]  schedule_timeout+0x15d/0x170
[168277.827873]  __wait_for_common+0x90/0x1c0
[168277.827876]  ? __pfx_schedule_timeout+0x10/0x10
[168277.827880]  __flush_workqueue+0x158/0x440
[168277.827882]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.827886]  xlog_cil_push_now.isra.0+0x5e/0xa0 [xfs]
[168277.827958]  xlog_cil_force_seq+0x69/0x240 [xfs]
[168277.828029]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.828031]  ? __schedule+0x42d/0x1460
[168277.828034]  xfs_log_force+0x7a/0x230 [xfs]
[168277.828105]  xfs_log_worker+0x3d/0xc0 [xfs]
[168277.828175]  process_one_work+0x18f/0x3b0
[168277.828178]  worker_thread+0x21f/0x330
[168277.828179]  ? __pfx_worker_thread+0x10/0x10
[168277.828182]  ? __pfx_worker_thread+0x10/0x10
[168277.828184]  kthread+0xcd/0x100
[168277.828186]  ? __pfx_kthread+0x10/0x10
[168277.828189]  ret_from_fork+0x31/0x50
[168277.828191]  ? __pfx_kthread+0x10/0x10
[168277.828193]  ret_from_fork_asm+0x1a/0x30
[168277.828198]  </TASK>
[168277.828218] task:kworker/u130:3  state:D stack:0     pid:781590 tgid:781590 ppid:2      flags:0x00004000
[168277.828221] Workqueue: writeback wb_workfn (flush-253:1)
[168277.828224] Call Trace:
[168277.828225]  <TASK>
[168277.828226]  ? __pfx_wbt_inflight_cb+0x10/0x10
[168277.828231]  __schedule+0x425/0x1460
[168277.828236]  ? __pfx_wbt_cleanup_cb+0x10/0x10
[168277.828238]  ? __pfx_wbt_inflight_cb+0x10/0x10
[168277.828240]  schedule+0x27/0xf0
[168277.828243]  io_schedule+0x46/0x70
[168277.828246]  rq_qos_wait+0xbe/0x130
[168277.828249]  ? __pfx_rq_qos_wake_function+0x10/0x10
[168277.828252]  ? __pfx_wbt_inflight_cb+0x10/0x10
[168277.828255]  wbt_wait+0xae/0x130
[168277.828259]  __rq_qos_throttle+0x24/0x40
[168277.828261]  blk_mq_submit_bio+0x1d2/0x780
[168277.828267]  __submit_bio+0x6c/0x240
[168277.828271]  submit_bio_noacct_nocheck+0x19a/0x3c0
[168277.828274]  ? submit_bio_noacct+0x47/0x5b0
[168277.828277]  iomap_submit_ioend+0x40/0x80
[168277.828280]  iomap_writepages+0x4be/0x9b0
[168277.828284]  xfs_vm_writepages+0x67/0x90 [xfs]
[168277.828370]  do_writepages+0x76/0x260
[168277.828373]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.828375]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.828376]  ? sched_clock_cpu+0xf/0x190
[168277.828379]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.828382]  ? psi_group_change+0x138/0x350
[168277.828385]  __writeback_single_inode+0x3d/0x350
[168277.828388]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.828389]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.828392]  writeback_sb_inodes+0x21c/0x4e0
[168277.828395]  ? select_task_rq_fair+0x1d4/0x1d30
[168277.828405]  __writeback_inodes_wb+0x4c/0xf0
[168277.828407]  ? srso_alias_return_thunk+0x5/0xfbef5
[168277.828410]  wb_writeback+0x193/0x310
[168277.828414]  wb_workfn+0x2b1/0x450
[168277.828418]  process_one_work+0x18f/0x3b0
[168277.828421]  worker_thread+0x21f/0x330
[168277.828423]  ? __pfx_worker_thread+0x10/0x10
[168277.828425]  kthread+0xcd/0x100
[168277.828428]  ? __pfx_kthread+0x10/0x10
[168277.828430]  ret_from_fork+0x31/0x50
[168277.828433]  ? __pfx_kthread+0x10/0x10
[168277.828436]  ret_from_fork_asm+0x1a/0x30
[168277.828440]  </TASK>



> On 1. Nov 2024, at 09:33, Christian Theune <ct@flyingcircus.io> wrote:
> 
> A thought about the high numbers: they look like relatively regular “many bits on” patterns:
> 
>>>> bin(4294967264)
> '0b11111111111111111111111111100000'
> 
> I think I enabled the bitmap online, so MAYBE that circumvents some memory initialisation that would otherwise happen when this gets set up during a regular boot. I’m not seeing those numbers anymore after booting cleanly.
> 
> I’m letting things run for a few hours again, but I’m still concerned that the printk overhead prevents the system from actually triggering the issue again.
> 
> If I’m not getting things stuck again, then I’ll double check to ensure that my reproducer is still valid on 6.11.5 without the debugging patch.
> 
> Christian
> 
>> On 1. Nov 2024, at 08:56, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Hi,
>> 
>> ok, so the journal didn’t retain that output because there was way too much of it. Looks like I actually need to stick with the serial console logging after all.
>> 
>> I dug out a different log that goes back further, but even that one seems to be missing something from early on, when I didn’t have the serial console attached.
>> 
>> I’m wondering whether this indicates an issue during initialization? I’m going to reboot the machine and make sure I get the early logs with those numbers.
>> 
>> [  405.347345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22301786792+8) 4294967259
>> [  432.542465] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967260
>> [  432.542469] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967261
>> [  434.272964] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967262
>> [  434.273175] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967263
>> [  434.273189] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967264
>> [  434.273285] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967265
>> [  434.274063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967264
>> [  434.274066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967263
>> [  434.274070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967262
>> [  434.274073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967261
>> [  434.274078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967260
>> [  434.274083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967259
>> [  434.276609] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967260
>> [  434.278939] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967261
>> [  464.922354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967260
>> [  464.931833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967259
>> [  466.964557] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967260
>> [  466.964616] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967261
>> [  474.399930] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967262
>> [  474.451451] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967263
>> [  489.447079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967262
>> [  489.456574] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967261
>> [  489.466069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967260
>> [  489.475565] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967259
>> [  491.235517] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967260
>> [  491.235602] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967261
>> [  498.153108] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967262
>> [  498.156307] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967263
>> [  530.332619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967262
>> [  530.342110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967261
>> [  530.351595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967260
>> [  530.361082] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967259
>> [  535.176774] __add_stripe_bio: md127: start ff2721beec8c2fa0(24985208424+8) 4294967260
>> [  549.125326] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24985208424+8) 4294967259
>> [  549.635782] __add_stripe_bio: md127: start ff2721beec8c2fa0(25521770024+8) 4294967261
>> [  590.875593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967260
>> [  590.885081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967259
>> [  596.973863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967263
>> [  596.973866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967262
>> [  596.973869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967261
>> [  596.973871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967260
>> [  596.973881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967259
>> [  596.974557] __add_stripe_bio: md127: start ff2721beec8c2fa0(26325099752+8) 4294967260
>> [  637.646142] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26325099752+8) 4294967259
>> [  641.292887] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032741096+8) 4294967260
>> [  654.931195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032741096+8) 4294967259
>> [  654.933295] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967260
>> [  654.933570] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967261
>> [  654.935967] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967262
>> [  654.937411] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967263
>> [  683.008873] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967264
>> [  685.689494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967263
>> [  685.689496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967262
>> [  685.689498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967261
>> [  685.689499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967260
>> [  685.689501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967259
>> [  685.690260] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967260
>> [  685.692999] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967261
>> [  685.693119] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967262
>> [  685.693124] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967263
>> [  685.693427] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967264
>> [  685.693428] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967265
>> [  685.693517] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967266
>> [  685.693528] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967267
>> [  713.684556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967266
>> [  713.694044] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967265
>> [  713.703539] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967264
>> [  713.713026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967263
>> [  713.722512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967262
>> [  713.731996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967261
>> [  713.741480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967260
>> [  713.750962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967259
>> [  715.765954] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967260
>> [  715.766034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967261
>> [  715.766278] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967262
>> [  715.766305] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967263
>> [  715.766468] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967264
>> [  716.077253] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967265
>> [  731.258391] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
>> [  731.258401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967261
>> [  731.258584] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967262
>> [  731.258711] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967263
>> [  731.260991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967264
>> [  731.261318] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967265
>> [  731.261513] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967266
>> [  758.285428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967265
>> [  758.294912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967264
>> [  758.304396] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967263
>> [  758.313881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967262
>> [  758.323377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967261
>> [  758.332875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967260
>> [  758.342365] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
>> [  758.922198] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
>> [  780.668347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
>> [  780.957247] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967260
>> [  780.957393] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967261
>> [  780.957440] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967262
>> [  780.957616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967263
>> [  780.957675] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967264
>> [  780.957754] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967265
>> [  790.623177] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967266
>> [  828.374094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967265
>> [  828.383581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967264
>> [  828.393067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967263
>> [  828.402553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967262
>> [  828.412040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967261
>> [  828.421525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967260
>> [  828.431012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967259
>> [  830.477927] __add_stripe_bio: md127: start ff2721beec8c2fa0(13690207080+8) 4294967260
>> [  851.040449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(13690207080+8) 4294967259
>> [  851.762678] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967260
>> [  851.762837] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967261
>> [  851.762948] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967262
>> [  851.763032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967263
>> [  851.763068] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967264
>> [  851.763112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967265
>> [  851.763202] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967266
>> [  851.766405] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967267
>> [  851.768763] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967266
>> [  851.768766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967265
>> [  851.768768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967264
>> [  851.768770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967263
>> [  851.768773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967262
>> [  851.768775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967261
>> [  851.768778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967260
>> [  851.768780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967259
>> [  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967261
>> [  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967260
>> [  880.058982] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967262
>> [  880.059032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967263
>> [  880.059090] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967264
>> [  880.059140] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967265
>> [  880.059317] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967266
>> [  891.735497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967265
>> [  891.744974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967264
>> [  891.754455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967263
>> [  891.763939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967262
>> [  891.773422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967261
>> [  891.782975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967260
>> [  891.792469] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967259
>> [  897.108788] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967260
>> [  897.108789] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967261
>> [  897.108813] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967262
>> [  897.108823] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967263
>> [  903.693112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967264
>> [  904.663454] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967265
>> [  906.906830] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967260
>> [  906.908087] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967261
>> [  906.908508] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967262
>> [  906.910088] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967263
>> [  906.912093] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967264
>> [  906.912840] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967265
>> [  906.914294] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967266
>> [  906.914323] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967267
>> [  906.914806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967266
>> [  906.914808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967265
>> [  906.914809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967264
>> [  906.914810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967263
>> [  906.914811] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967262
>> [  906.914813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967261
>> [  906.914815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967260
>> [  906.914817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967259
>> [  934.849642] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861773736+8) 4294967261
>> [  934.854037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967260
>> [  934.854040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967259
>> [  934.855808] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967260
>> [  934.855945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967261
>> [  963.315203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967262
>> [  963.315320] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967263
>> [  963.315327] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967264
>> [  963.315499] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967265
>> [  982.866693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967264
>> [  982.876178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967263
>> [  982.885665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967262
>> [  982.895158] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967261
>> [  982.904644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967260
>> [  982.914129] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967259
>> [  990.121616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967260
>> [  990.121662] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967261
>> [  990.121768] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967262
>> [  990.121828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967263
>> [  990.121843] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967264
>> [ 1013.206756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967263
>> [ 1013.206757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967262
>> [ 1013.206758] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967261
>> [ 1013.206759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967260
>> [ 1013.224363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967259
>> [ 1032.134913] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967260
>> [ 1032.134928] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967261
>> [ 1032.135028] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967262
>> [ 1032.135078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967263
>> [ 1041.027196] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967264
>> [ 1041.027321] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967265
>> [ 1041.027485] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967266
>> [ 1057.623365] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967267
>> [ 1076.893035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967266
>> [ 1076.902520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967265
>> [ 1076.912004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967264
>> [ 1076.921490] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967263
>> [ 1076.930986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967262
>> [ 1076.940475] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967261
>> [ 1076.949962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967260
>> [ 1076.959446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967259
>> [ 1077.721459] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967260
>> [ 1077.721615] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967261
>> [ 1077.721706] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967262
>> [ 1077.721739] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967263
>> [ 1077.721765] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967264
>> [ 1110.833257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967263
>> [ 1110.842743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967262
>> [ 1110.852225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967261
>> [ 1110.861709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967260
>> [ 1110.871194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967259
>> [ 1112.052569] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967260
>> [ 1112.052666] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967261
>> [ 1112.052695] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967262
>> [ 1112.052727] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967263
>> [ 1112.052778] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967264
>> [ 1112.053637] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967265
>> [ 1112.053649] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967266
>> [ 1173.829738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967265
>> [ 1173.839223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967264
>> [ 1173.848709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967263
>> [ 1173.858195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967262
>> [ 1173.867683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967261
>> [ 1173.877167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967260
>> [ 1173.886654] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967259
>> [ 1176.428651] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967260
>> [ 1176.428940] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967263
>> [ 1176.428942] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967264
>> [ 1176.428939] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967262
>> [ 1176.428903] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967261
>> [ 1176.429040] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967265
>> [ 1191.700497] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967265
>> [ 1191.704134] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967266
>> [ 1191.704199] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967267
>> [ 1191.705804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967266
>> [ 1191.705808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967265
>> [ 1191.705809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967264
>> [ 1191.705812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967263
>> [ 1191.705815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967262
>> [ 1191.705817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967261
>> [ 1191.705819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967260
>> [ 1191.705821] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967259
>> [ 1191.810863] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792488+8) 4294967260
>> [ 1244.235788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27917293544+8) 4294967260
>> [ 1309.535319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967264
>> [ 1309.544810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967263
>> [ 1309.554303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967262
>> [ 1309.563787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967261
>> [ 1309.573272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967260
>> [ 1309.582759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967259
>> [ 1314.950362] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967262
>> [ 1314.950455] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967263
>> [ 1314.950457] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967264
>> [ 1314.950470] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967265
>> [ 1345.736319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967264
>> [ 1345.745804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967263
>> [ 1345.755290] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967262
>> [ 1345.764773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967261
>> [ 1345.774264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967260
>> [ 1345.783759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967259
>> [ 1346.823135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28462541160+8) 4294967259
>> [ 1346.824776] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967260
>> [ 1346.824799] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967261
>> [ 1346.824806] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967262
>> [ 1346.824922] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967263
>> [ 1346.825566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967264
>> [ 1373.560546] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814568+8) 4294967260
>> [ 1431.650090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861814568+8) 4294967259
>> [ 1468.944088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967265
>> [ 1468.953581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967264
>> [ 1468.963067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967263
>> [ 1468.972552] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967262
>> [ 1468.982036] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967261
>> [ 1468.991524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967260
>> [ 1469.001009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967259
>> [ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967261
>> [ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967260
>> [ 1474.904634] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967262
>> [ 1474.904752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967263
>> [ 1474.904798] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967264
>> [ 1474.904805] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967265
>> [ 1477.837716] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967266
>> [ 1479.836591] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967260
>> [ 1479.858896] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967261
>> [ 1479.859238] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967262
>> [ 1479.859525] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967263
>> [ 1479.859669] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967264
>> [ 1479.859897] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967265
>> [ 1479.860071] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967266
>> [ 1507.386887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967265
>> [ 1507.396375] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967264
>> [ 1507.405858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967263
>> [ 1507.415343] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967262
>> [ 1507.424831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967261
>> [ 1507.434322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967260
>> [ 1507.443826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967259
>> [ 1569.056325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967264
>> [ 1569.065837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967263
>> [ 1569.075325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967262
>> [ 1569.084816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967261
>> [ 1569.094308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967260
>> [ 1569.103801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967259
>> [ 1571.985752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967260
>> [ 1571.985858] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967261
>> [ 1571.985888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967263
>> [ 1571.985864] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967262
>> [ 1571.985962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967264
>> [ 1571.986450] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967265
>> [ 1582.882338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967264
>> [ 1582.882340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967263
>> [ 1582.882342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967262
>> [ 1582.882344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967261
>> [ 1582.882345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967260
>> [ 1582.882346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967259
>> [ 1582.884560] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967260
>> [ 1582.884860] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967261
>> [ 1582.884880] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967262
>> [ 1582.885034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967263
>> [ 1582.885126] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967264
>> [ 1582.885164] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967265
>> [ 1675.519030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967264
>> [ 1675.528518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967263
>> [ 1675.538016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967262
>> [ 1675.547513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967261
>> [ 1675.556999] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967260
>> [ 1675.566485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967259
>> [ 1682.250399] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967260
>> [ 1682.250639] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967261
>> [ 1682.250690] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967262
>> [ 1682.250718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967263
>> [ 1682.250974] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967264
>> [ 1682.251078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967265
>> [ 1682.251306] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967266
>> [ 1704.298207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967265
>> [ 1704.298211] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967264
>> [ 1704.298214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967263
>> [ 1704.298216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967262
>> [ 1704.298218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967261
>> [ 1704.298220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967260
>> [ 1704.298222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967259
>> [ 1704.299566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967260
>> [ 1704.299718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967261
>> [ 1704.299758] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967262
>> [ 1704.299834] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967263
>> [ 1704.299888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967264
>> [ 1704.304001] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967265
>> [ 1704.304132] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967266
>> [ 1772.283712] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967264
>> [ 1772.293200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967263
>> [ 1772.302685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967262
>> [ 1772.312169] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967261
>> [ 1772.321652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967260
>> [ 1772.331135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967259
>> [ 1776.549687] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967260
>> [ 1776.549697] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967261
>> [ 1776.549898] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967263
>> [ 1776.549945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967264
>> [ 1776.549962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967265
>> [ 1776.549828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967262
>> [ 1776.550033] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967266
>> [ 1776.550080] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967267
>> [ 1781.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967266
>> [ 1782.080461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967265
>> [ 1782.199404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967264
>> [ 1782.318346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967263
>> [ 1782.438150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967262
>> [ 1782.557963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967261
>> [ 1782.677762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967260
>> [ 1782.797570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967259
>> [ 1786.992892] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967261
>> [ 1786.992878] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967260
>> [ 1786.993259] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967262
>> [ 1786.993401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967263
>> [ 1786.993449] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967264
>> [ 1795.858021] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967266
>> [ 1795.858009] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967265
>> [ 1795.858180] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967267
>> [ 1805.164880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967266
>> [ 1805.174370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967265
>> [ 1805.183853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967264
>> [ 1805.193339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967263
>> [ 1805.202828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967262
>> [ 1805.212314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967261
>> [ 1805.221800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967260
>> [ 1805.231291] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967259
>> [ 1807.730968] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967261
>> [ 1807.730937] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967260
>> [ 1807.731203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967262
>> [ 1807.731267] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967263
>> [ 1807.731406] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967264
>> [ 1807.731542] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967265
>> [ 1807.731764] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967266
>> [ 1893.800189] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967263
>> [ 1893.809691] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967262
>> [ 1893.819186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967261
>> [ 1893.828675] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967260
>> [ 1893.838165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967259
>> [ 1897.304170] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967260
>> [ 1897.304333] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967261
>> [ 1897.304579] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967262
>> [ 1897.304721] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967263
>> [ 1897.304812] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967264
>> [ 1897.304978] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967265
>> [ 1910.883901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861873064+8) 4294967259
>> [ 1910.888991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967262
>> [ 1910.888995] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967264
>> [ 1910.888988] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967261
>> [ 1910.888993] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967263
>> [ 1910.888986] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967260
>> [ 1990.952649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967264
>> [ 1990.952651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967263
>> [ 1990.952653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967262
>> [ 1990.952655] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967261
>> [ 1990.952657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967260
>> [ 1990.952659] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967259
>> [ 1990.957010] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967260
>> [ 1990.957011] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967261
>> [ 1990.957016] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967262
>> [ 1990.957020] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967263
>> [ 2021.437780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967262
>> [ 2021.437782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967261
>> [ 2021.437783] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967260
>> [ 2021.437785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967259
>> [ 2021.442407] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967260
>> [ 2021.443820] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967261
>> [ 2045.539668] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967262
>> [ 2045.540142] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967263
>> [ 2045.540232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967264
>> [ 2045.540262] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967265
>> [ 2050.125201] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967266
>> [ 2057.875279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967265
>> [ 2057.884767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967264
>> [ 2057.894262] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967263
>> [ 2057.903753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967262
>> [ 2057.913237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967261
>> [ 2057.922722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967260
>> [ 2057.932205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967259
>> [ 2059.233074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967260
>> [ 2059.233116] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967261
>> [ 2059.233120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967262
>> [ 2059.233171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967263
>> [ 2059.233632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967264
>> [ 2059.233684] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967265
>> [ 2059.235328] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967266
>> [ 2059.235336] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967267
>> [ 2059.238433] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967266
>> [ 2059.238435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967265
>> [ 2059.238436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967264
>> [ 2059.238437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967263
>> [ 2059.238439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967262
>> [ 2059.238440] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967261
>> [ 2059.238441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967260
>> [ 2059.238443] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967259
>> [ 2090.648331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967260
>> [ 2090.648399] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967261
>> [ 2090.648402] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967262
>> [ 2090.648414] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967263
>> [ 2090.648428] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967264
>> [ 2090.648540] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967265
>> [ 2090.651017] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967266
>> [ 2090.700177] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967267
>> [ 2118.173167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967266
>> [ 2118.182657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967265
>> [ 2118.192147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967264
>> [ 2118.201638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967263
>> [ 2118.211138] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967262
>> [ 2118.220626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967261
>> [ 2118.230111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967260
>> [ 2118.239602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967259
>> [ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967260
>> [ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967261
>> [ 2119.232691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967262
>> [ 2119.232707] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967263
>> [ 2119.232880] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967264
>> [ 2119.232926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967265
>> [ 2119.232990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967266
>> [ 2119.233054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967267
>> [ 2146.417917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967266
>> [ 2151.981747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967265
>> [ 2152.121412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967264
>> [ 2152.261047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967263
>> [ 2152.400699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967262
>> [ 2152.540356] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967261
>> [ 2152.680005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967260
>> [ 2152.819653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967259
>> [ 2157.037069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967260
>> [ 2157.037363] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967261
>> [ 2157.037405] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967262
>> [ 2157.037421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967263
>> [ 2157.037446] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967264
>> [ 2214.237201] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967263
>> [ 2214.246685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967262
>> [ 2214.256174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967261
>> [ 2214.265665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967260
>> [ 2214.275152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967259
>> [ 2220.022835] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967260
>> [ 2220.022859] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967261
>> [ 2220.022876] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967262
>> [ 2220.022912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967263
>> [ 2220.023258] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967265
>> [ 2220.023161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967264
>> [ 2243.495792] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130495656+8) 4294967264
>> [ 2272.045830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967265
>> [ 2272.045833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967264
>> [ 2272.045835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967263
>> [ 2272.045837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967262
>> [ 2272.045838] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967261
>> [ 2272.045840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967260
>> [ 2272.045841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967259
>> [ 2302.557785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967266
>> [ 2302.557787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967265
>> [ 2302.557789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967264
>> [ 2302.557791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967263
>> [ 2302.557793] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967262
>> [ 2302.557796] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967261
>> [ 2302.557797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967260
>> [ 2302.557799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967259
>> [ 2302.561904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967260
>> [ 2302.561926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967261
>> [ 2302.561957] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967263
>> [ 2302.561933] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967262
>> [ 2302.562006] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967264
>> [ 2302.562203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967265
>> [ 2302.562232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967266
>> [ 2302.562597] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967267
>> [ 2329.647721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967266
>> [ 2329.738196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967265
>> [ 2329.828677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967264
>> [ 2329.919153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967263
>> [ 2330.009644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967262
>> [ 2330.100125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967261
>> [ 2330.190603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967260
>> [ 2330.281085] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967259
>> [ 2332.172188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967265
>> [ 2332.172198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967264
>> [ 2332.172233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967263
>> [ 2332.172242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967262
>> [ 2332.172255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967261
>> [ 2332.172264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967260
>> [ 2332.172278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967259
>> [ 2332.178323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967262
>> [ 2332.178317] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967261
>> [ 2332.178310] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967260
>> [ 2332.178326] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967263
>> [ 2332.178394] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967264
>> [ 2332.178580] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967265
>> [ 2332.178600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967266
>> [ 2332.178697] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967267
>> [ 2358.527771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967266
>> [ 2358.657096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967265
>> [ 2358.786383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967264
>> [ 2358.915693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967263
>> [ 2359.044994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967262
>> [ 2359.174296] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967261
>> [ 2359.303592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967260
>> [ 2359.432875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967259
>> [ 2367.401519] __add_stripe_bio: md127: start ff2721beec8c2fa0(27111972904+8) 4294967260
>> [ 2367.410065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27111972904+8) 4294967259
>> [ 2399.790368] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967260
>> [ 2403.855440] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967261
>> [ 2403.855574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967262
>> [ 2403.855636] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967263
>> [ 2403.855687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967264
>> [ 2478.513548] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967263
>> [ 2478.523034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967262
>> [ 2478.532518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967261
>> [ 2478.542003] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967260
>> [ 2478.551487] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967259
>> [ 2483.294420] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967264
>> [ 2483.294422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967263
>> [ 2483.294423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967262
>> [ 2483.294425] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967261
>> [ 2483.294426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967260
>> [ 2483.294428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967259
>> [ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967260
>> [ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967261
>> [ 2515.139584] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967262
>> [ 2515.139592] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967264
>> [ 2515.139587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967263
>> [ 2515.139593] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967265
>> [ 2515.139795] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967266
>> [ 2556.408113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967264
>> [ 2556.417600] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967263
>> [ 2556.427088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967262
>> [ 2556.436578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967261
>> [ 2556.446066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967260
>> [ 2556.455551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967259
>> [ 2559.815547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967260
>> [ 2559.815563] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967261
>> [ 2559.815793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967262
>> [ 2559.815874] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967263
>> [ 2559.816031] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967264
>> [ 2568.256111] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967265
>> [ 2568.256157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967266
>> [ 2619.422458] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967265
>> [ 2619.431942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967264
>> [ 2619.441449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967263
>> [ 2619.450948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967262
>> [ 2619.460435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967261
>> [ 2619.469921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967260
>> [ 2619.479412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967259
>> [ 2633.472211] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967261
>> [ 2633.472205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967260
>> [ 2633.472305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967262
>> [ 2633.472427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967264
>> [ 2633.472417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967263
>> [ 2633.472587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967265
>> [ 2633.539700] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967266
>> [ 2661.223491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967265
>> [ 2661.223493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967264
>> [ 2661.223494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967263
>> [ 2661.223496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967262
>> [ 2661.223497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967261
>> [ 2661.223498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967260
>> [ 2661.223500] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967259
>> [ 2661.228040] __add_stripe_bio: md127: start ff2721beec8c2fa0(539699176+8) 4294967260
>> [ 2706.576782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(539699176+8) 4294967259
>> [ 2709.937157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967260
>> [ 2709.937171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967261
>> [ 2709.937316] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967262
>> [ 2709.937650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967263
>> [ 2709.937717] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967264
>> [ 2709.937724] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967265
>> [ 2709.937725] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967266
>> [ 2709.937737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967267
>> [ 2721.462599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967266
>> [ 2721.610877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967265
>> [ 2721.759136] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967264
>> [ 2721.907488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967263
>> [ 2722.055771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967262
>> [ 2722.204050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967261
>> [ 2722.352331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967260
>> [ 2722.500592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967259
>> [ 2724.772625] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967260
>> [ 2724.772751] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967261
>> [ 2724.772938] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967262
>> [ 2724.772988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967263
>> [ 2724.773119] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967264
>> [ 2754.673788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967263
>> [ 2754.673790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967262
>> [ 2754.673791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967261
>> [ 2754.673794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967260
>> [ 2754.673795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967259
>> [ 2785.394053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967261
>> [ 2785.394056] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967263
>> [ 2785.394059] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967265
>> [ 2785.394054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967262
>> [ 2785.394058] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967264
>> [ 2785.394050] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967260
>> [ 2785.463401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967266
>> [ 2785.472247] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967267
>> [ 2787.076731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967266
>> [ 2787.076732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967265
>> [ 2787.076734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967264
>> [ 2787.076735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967263
>> [ 2787.076736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967262
>> [ 2787.076738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967261
>> [ 2787.076739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967260
>> [ 2787.076740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967259
>> [ 2808.905214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967265
>> [ 2808.905333] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967266
>> [ 2808.905388] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967267
>> [ 2808.906939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967266
>> [ 2808.906941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967265
>> [ 2808.906943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967264
>> [ 2808.906944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967263
>> [ 2808.906946] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967262
>> [ 2808.906947] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967261
>> [ 2808.906950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967260
>> [ 2808.906952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967259
>> [ 2836.311276] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130552808+8) 4294967260
>> [ 2854.798417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
>> [ 2856.543067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967264
>> [ 2856.543070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967263
>> [ 2856.543073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967262
>> [ 2856.543075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967261
>> [ 2856.543077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967260
>> [ 2856.543079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
>> [ 2856.546312] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967260
>> [ 2856.546314] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967261
>> [ 2856.546421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967262
>> [ 2856.546509] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967263
>> [ 2856.546926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967264
>> [ 2886.489550] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967265
>> [ 2886.489595] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967266
>> [ 2886.489713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967267
>> [ 2897.617989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967266
>> [ 2897.627477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967265
>> [ 2897.636962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967264
>> [ 2897.646444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967263
>> [ 2897.655920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967262
>> [ 2897.665409] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967261
>> [ 2897.674910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967260
>> [ 2897.684404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967259
>> [ 2899.844282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967266
>> [ 2899.844316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967265
>> [ 2899.844354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967264
>> [ 2899.844382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967263
>> [ 2899.844423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967262
>> [ 2899.844462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967261
>> [ 2899.844516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967260
>> [ 2899.844570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967259
>> [ 2899.845690] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967260
>> [ 2899.845966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967261
>> [ 2899.846019] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967262
>> [ 2899.846062] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967263
>> [ 2899.846186] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967264
>> [ 2899.846207] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967265
>> [ 2899.846260] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967266
>> [ 2952.891498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967265
>> [ 2952.900984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967264
>> [ 2952.910478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967263
>> [ 2952.919966] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967262
>> [ 2952.929461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967261
>> [ 2952.938950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967260
>> [ 2952.948431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967259
>> [ 2955.316494] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967260
>> [ 2955.316704] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967261
>> [ 2955.316809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967262
>> [ 2955.316988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967263
>> [ 2955.317105] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967264
>> [ 2987.714377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967263
>> [ 2987.714379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967262
>> [ 2987.714381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967261
>> [ 2987.714383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967260
>> [ 2987.714385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967259
>> [ 2987.719137] __add_stripe_bio: md127: start ff2721beec8c2fa0(1092459240+8) 4294967260
>> [ 3047.110275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967264
>> [ 3047.110276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967263
>> [ 3047.110277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967262
>> [ 3047.110278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967261
>> [ 3047.110279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967260
>> [ 3047.110281] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967259
>> [ 3047.112501] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967260
>> [ 3047.112711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967261
>> [ 3047.112750] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967262
>> [ 3070.186991] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967263
>> [ 3070.187120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967264
>> [ 3110.127145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967263
>> [ 3110.136633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967262
>> [ 3110.146121] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967261
>> [ 3110.155611] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967260
>> [ 3110.165103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967259
>> [ 3113.362512] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967260
>> [ 3113.362527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967261
>> [ 3113.362737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967262
>> [ 3113.362788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967264
>> [ 3113.362772] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967263
>> [ 3113.363524] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967265
>> [ 3181.040787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967264
>> [ 3181.050278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967263
>> [ 3181.059767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967262
>> [ 3181.069267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967261
>> [ 3181.078760] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967260
>> [ 3181.088248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967259
>> [ 3190.006331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967260
>> [ 3190.006353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967261
>> [ 3190.006523] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967262
>> [ 3190.006526] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967263
>> [ 3190.006576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967264
>> [ 3190.006604] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967265
>> [ 3190.006676] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967266
>> [ 3222.489157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130590248+8) 4294967259
>> [ 3222.494665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967260
>> [ 3222.494810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967261
>> [ 3222.495400] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967262
>> [ 3222.495460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967263
>> [ 3222.496203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967264
>> [ 3222.496266] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967265
>> [ 3249.542100] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967264
>> [ 3249.542102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967263
>> [ 3249.542103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967262
>> [ 3249.542105] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967261
>> [ 3249.542107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967260
>> [ 3249.542109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967259
>> [ 3249.547575] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130590440+8) 4294967260
>> [ 3298.070385] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967260
>> [ 3298.070466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967261
>> [ 3298.070767] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967262
>> [ 3298.070824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967263
>> [ 3298.070896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967264
>> [ 3351.191989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967263
>> [ 3351.201478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967262
>> [ 3351.210961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967261
>> [ 3351.220447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967260
>> [ 3351.229931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967259
>> [ 3354.186090] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967260
>> [ 3354.186174] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967261
>> [ 3354.186453] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967262
>> [ 3354.186600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967263
>> [ 3354.186610] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967264
>> [ 3354.186666] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967265
>> [ 3354.186682] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967266
>> [ 3395.962921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967265
>> [ 3395.972408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967264
>> [ 3395.981893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967263
>> [ 3395.991379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967262
>> [ 3396.000863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967261
>> [ 3396.010344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967260
>> [ 3396.019828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967259
>> [ 3397.783940] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967260
>> [ 3397.783984] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967261
>> [ 3397.784015] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967262
>> [ 3397.784039] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967263
>> [ 3397.784102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967264
>> [ 3397.784112] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967265
>> [ 3397.784206] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967266
>> [ 3397.784239] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967267
>> [ 3407.297456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967266
>> [ 3407.515478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967265
>> [ 3407.733622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967264
>> [ 3407.952516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967263
>> [ 3408.171430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967262
>> [ 3408.390355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967261
>> [ 3408.609270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967260
>> [ 3408.828226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967259
>> [ 3410.118755] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967260
>> [ 3410.118912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967261
>> [ 3410.119044] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967262
>> [ 3410.119183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967263
>> [ 3410.119190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967264
>> [ 3410.119398] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967265
>> [ 3474.070919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967264
>> [ 3474.080400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967263
>> [ 3474.089879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967262
>> [ 3474.099370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967261
>> [ 3474.108856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967260
>> [ 3474.118342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967259
>> [ 3477.161290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967261
>> [ 3477.161249] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967260
>> [ 3477.161478] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967262
>> [ 3477.161505] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967263
>> [ 3487.704510] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967264
>> [ 3487.704567] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967265
>> [ 3510.992084] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967264
>> [ 3510.992086] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967263
>> [ 3510.992088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967262
>> [ 3510.992089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967261
>> [ 3510.992090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967260
>> [ 3510.992091] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967259
>> [ 3510.992993] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3510.993007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
>> [ 3550.083557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3550.083561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
>> [ 3555.089305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3555.089386] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
>> [ 3555.089618] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967262
>> [ 3555.089635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967263
>> [ 3555.089655] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967264
>> [ 3572.781054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967263
>> [ 3572.790547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967262
>> [ 3572.800034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967261
>> [ 3572.809523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3572.819005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
>> [ 3589.172647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967266
>> [ 3589.172750] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967265
>> [ 3589.172860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967264
>> [ 3589.172991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967263
>> [ 3589.173151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967262
>> [ 3589.173314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967261
>> [ 3589.173391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967260
>> [ 3589.173461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967259
>> [ 3589.175972] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743848+8) 4294967260
>> [ 3621.769395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032743848+8) 4294967259
>> [ 3623.304014] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399975272+8) 4294967260
>> [ 3686.974958] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399986216+8) 4294967267
>> [ 3722.135513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967266
>> [ 3722.144998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967265
>> [ 3722.154484] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967264
>> [ 3722.163972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967263
>> [ 3722.173455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967262
>> [ 3722.182939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967261
>> [ 3722.192426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967260
>> [ 3722.201912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967259
>> [ 3729.633922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967260
>> [ 3729.634199] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967261
>> [ 3729.634228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967262
>> [ 3729.634351] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967263
>> [ 3729.634466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967264
>> [ 3737.013926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967265
>> [ 3737.016635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967266
>> [ 3761.542817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967265
>> [ 3761.542819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967264
>> [ 3761.542820] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967263
>> [ 3761.542822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967262
>> [ 3761.542824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967261
>> [ 3761.542826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967260
>> [ 3761.542827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967259
>> [ 3761.545145] __add_stripe_bio: md127: start ff2721beec8c2fa0(4298178536+8) 4294967260
>> [ 3781.220916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(4298178536+8) 4294967259
>> [ 3816.363850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967261
>> [ 3816.363852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967260
>> [ 3816.363853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967259
>> [ 3816.366775] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967260
>> [ 3816.367295] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967261
>> [ 3816.367301] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967262
>> [ 3816.367544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967263
>> [ 3816.367693] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967264
>> [ 3816.368092] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967265
>> [ 3843.302810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967266
>> [ 3869.720089] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967261
>> [ 3869.720097] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967262
>> [ 3869.720194] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967263
>> [ 3869.720213] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967264
>> [ 3869.725214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967265
>> [ 3911.191478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967264
>> [ 3911.200970] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967263
>> [ 3911.210456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967262
>> [ 3911.219942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967261
>> [ 3911.229426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967260
>> [ 3911.238911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967259
>> [ 3914.293028] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967264
>> [ 3914.293031] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967263
>> [ 3914.293033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967262
>> [ 3914.293035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967261
>> [ 3914.293038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967260
>> [ 3914.293040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967259
>> [ 3914.295622] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967261
>> [ 3914.295641] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967262
>> [ 3914.295643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967263
>> [ 3914.295621] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967260
>> [ 3914.295871] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967264
>> [ 3999.621383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967263
>> [ 3999.630885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967262
>> [ 3999.640370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967261
>> [ 3999.649865] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967260
>> [ 3999.659351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967259
>> [ 4004.391868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967260
>> [ 4004.391913] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967261
>> [ 4004.392117] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967262
>> [ 4004.392564] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967263
>> [ 4004.392579] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967264
>> [ 4004.392650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967265
>> [ 4004.392858] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967266
>> [ 4074.054694] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967265
>> [ 4074.064183] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967264
>> [ 4074.073668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967263
>> [ 4074.083155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967262
>> [ 4074.092647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967261
>> [ 4074.102149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967260
>> [ 4074.111642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967259
>> [ 4114.890444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967265
>> [ 4114.890445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967264
>> [ 4114.890446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967263
>> [ 4114.890447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967262
>> [ 4114.890448] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967261
>> [ 4114.890449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967260
>> [ 4114.890450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967259
>> [ 4114.894014] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743784+8) 4294967260
>> [ 4137.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967260
>> [ 4137.422134] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967261
>> [ 4137.422222] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967262
>> [ 4137.422380] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967263
>> [ 4137.422447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967264
>> [ 4137.422527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967265
>> [ 4137.422809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967266
>> [ 4195.441417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967265
>> [ 4195.450902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967264
>> [ 4195.460386] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967263
>> [ 4195.469873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967262
>> [ 4195.479360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967261
>> [ 4195.488848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967260
>> [ 4195.498336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967259
>> [ 4205.104432] __add_stripe_bio: md127: start ff2721beec8c2fa0(6458132264+8) 4294967260
>> [ 4270.444837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6458132264+8) 4294967259
>> [ 4282.274860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967265
>> [ 4282.274879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967264
>> [ 4282.274897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967263
>> [ 4282.274916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967262
>> [ 4282.274936] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967261
>> [ 4282.274955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967260
>> [ 4282.274975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967259
>> [ 4282.276460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967260
>> [ 4282.276797] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967261
>> [ 4282.276964] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967262
>> [ 4282.277061] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967263
>> [ 4282.277143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967264
>> [ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967266
>> [ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967265
>> [ 4282.277271] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967267
>> [ 4282.321155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967266
>> [ 4282.387435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967265
>> [ 4282.448733] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967264
>> [ 4282.448742] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967263
>> [ 4282.448751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967262
>> [ 4282.448759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967261
>> [ 4282.448767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967260
>> [ 4282.448775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967259
>> [ 4315.061841] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967260
>> [ 4315.061855] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967262
>> [ 4315.061844] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967261
>> [ 4315.061924] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967263
>> [ 4315.061976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967264
>> [ 4315.063503] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967265
>> [ 4382.511212] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967264
>> [ 4382.520702] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967263
>> [ 4382.530180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967262
>> [ 4382.539665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967261
>> [ 4382.549163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967260
>> [ 4382.558657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967259
>> [ 4387.176732] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967260
>> [ 4387.176821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967261
>> [ 4387.176898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967262
>> [ 4387.177030] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967263
>> [ 4387.177229] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967264
>> [ 4387.177270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967265
>> [ 4457.957710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967264
>> [ 4457.957715] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967263
>> [ 4457.957719] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967262
>> [ 4457.957723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967261
>> [ 4457.957727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967260
>> [ 4457.957731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967259
>> [ 4457.961270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967260
>> [ 4457.961406] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967261
>> [ 4457.961619] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967262
>> [ 4457.961651] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967264
>> [ 4457.961806] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967265
>> [ 4457.961645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967263
>> [ 4485.589613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967264
>> [ 4485.589615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967263
>> [ 4485.589616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967262
>> [ 4485.589617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967261
>> [ 4485.589618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967260
>> [ 4485.589619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967259
>> [ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967260
>> [ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967261
>> [ 4485.593288] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967262
>> [ 4485.593416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967263
>> [ 4485.593532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967264
>> [ 4485.593652] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967265
>> [ 4485.593678] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967266
>> [ 4485.850480] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967267
>> [ 4515.537222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967266
>> [ 4515.537223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967265
>> [ 4515.537224] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967264
>> [ 4515.537226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967263
>> [ 4515.537227] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967262
>> [ 4515.537228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967261
>> [ 4515.537229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967260
>> [ 4515.537231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967259
>> [ 4515.539155] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967260
>> [ 4515.539253] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967261
>> [ 4515.539324] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967262
>> [ 4515.539513] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967263
>> [ 4515.539522] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967264
>> [ 4543.939187] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967265
>> [ 4567.298898] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967264
>> [ 4567.308382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967263
>> [ 4567.317870] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967262
>> [ 4567.327355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967261
>> [ 4567.336851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967260
>> [ 4567.346342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967259
>> [ 4574.769978] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967260
>> [ 4574.770644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967261
>> [ 4574.770713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967262
>> [ 4585.659234] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967263
>> [ 4585.659638] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967264
>> [ 4585.659851] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967265
>> [ 4628.062519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967264
>> [ 4628.062521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967263
>> [ 4628.062522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967262
>> [ 4628.062524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967261
>> [ 4628.062525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967260
>> [ 4628.062526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967259
>> [ 4628.067547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967260
>> [ 4628.067553] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967262
>> [ 4628.067549] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967261
>> [ 4628.067556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967263
>> [ 4628.067558] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967264
>> [ 4628.067643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967265
>> [ 4628.067650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967266
>> [ 4655.735972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967266
>> [ 4655.738016] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967267
>> [ 4655.740269] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967266
>> [ 4655.740270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967265
>> [ 4655.740271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967264
>> [ 4655.740273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967263
>> [ 4655.740274] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967262
>> [ 4655.740275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967261
>> [ 4655.740277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967260
>> [ 4655.740278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967259
>> [ 4655.744826] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967260
>> [ 4655.745042] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967261
>> [ 4655.745074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967262
>> [ 4655.745162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967263
>> [ 4684.693786] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967264
>> [ 4707.657198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967263
>> [ 4707.666685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967262
>> [ 4707.676172] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967261
>> [ 4707.685664] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967260
>> [ 4707.695155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967259
>> [ 4714.104353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967260
>> [ 4714.104370] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967261
>> [ 4714.104532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967262
>> [ 4714.104556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967263
>> [ 4714.104738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967264
>> [ 4714.104749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967265
>> [ 4714.104870] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967266
>> [ 4767.415146] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967265
>> [ 4767.424642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967264
>> [ 4767.434126] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967263
>> [ 4767.443610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967262
>> [ 4767.453092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967261
>> [ 4767.462576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967260
>> [ 4767.472063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967259
>> [ 4772.506161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967260
>> [ 4772.506215] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967261
>> [ 4772.506341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967262
>> [ 4772.506665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967263
>> [ 4772.506877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967264
>> [ 4772.507010] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967265
>> [ 4788.406891] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967266
>> [ 4841.522571] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967260
>> [ 4841.523093] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967261
>> [ 4907.596094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967260
>> [ 4907.596096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967259
>> [ 4907.596899] __add_stripe_bio: md127: start ff2721beec8c2fa0(27651747176+8) 4294967260
>> [ 4973.644083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27651747176+8) 4294967259
>> [ 4983.401423] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967260
>> [ 4983.401434] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967261
>> [ 4983.401439] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967262
>> [ 4983.412449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
>> [ 4983.412450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
>> [ 4983.412452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
>> [ 5009.844830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967263
>> [ 5009.844831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967262
>> [ 5009.844832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
>> [ 5009.844834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
>> [ 5009.844835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
>> [ 5036.841498] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967260
>> [ 5036.841516] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967261
>> [ 5036.841589] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967262
>> [ 5036.841711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967263
>> [ 5036.841866] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967264
>> [ 5036.842021] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967265
>> [ 5065.748527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967264
>> [ 5065.758011] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967263
>> [ 5065.767510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967262
>> [ 5065.776985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967261
>> [ 5065.786473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967260
>> [ 5065.795964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967259
>> [ 5069.198341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967260
>> [ 5069.198447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967261
>> [ 5069.198475] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967262
>> [ 5069.198565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967263
>> [ 5069.198600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967264
>> [ 5069.198657] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967265
>> [ 5069.198719] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967266
>> [ 5069.198749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967267
>> [ 5158.739304] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967265
>> [ 5158.748804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967264
>> [ 5158.758295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967263
>> [ 5158.767784] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967262
>> [ 5158.777267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967261
>> [ 5158.786749] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967260
>> [ 5158.796231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967259
>> [ 5174.398776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967260
>> [ 5174.398877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967261
>> [ 5174.398898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967262
>> [ 5174.398990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967263
>> [ 5174.399068] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967264
>> [ 5174.399165] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967265
>> [ 5215.123596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967264
>> [ 5215.133088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967263
>> [ 5215.142587] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967262
>> [ 5215.152066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967261
>> [ 5215.161549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967260
>> [ 5215.171035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967259
>> [ 5223.954743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967260
>> [ 5223.954779] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967261
>> [ 5223.954951] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967262
>> [ 5223.955007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967263
>> [ 5223.955223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967264
>> [ 5223.955228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967265
>> [ 5223.955230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967266
>> [ 5223.955472] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967267
>> [ 5259.738014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967266
>> [ 5260.031050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967265
>> [ 5260.324115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967264
>> [ 5260.617186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967263
>> [ 5260.910270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967262
>> [ 5261.203354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967261
>> [ 5261.496406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967260
>> [ 5261.789480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967259
>> [ 5265.172862] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967260
>> [ 5265.173244] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967261
>> [ 5265.173322] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967262
>> [ 5265.173763] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967263
>> [ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967265
>> [ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967264
>> [ 5265.173928] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967266
>> [ 5265.173973] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967267
>> [ 5294.960280] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967266
>> [ 5295.249813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967265
>> [ 5295.539340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967264
>> [ 5295.829752] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967263
>> [ 5296.120184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967262
>> [ 5296.410623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967261
>> [ 5296.701004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967260
>> [ 5296.991418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967259
>> [ 5298.908411] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967260
>> [ 5298.908458] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967261
>> [ 5298.908544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967262
>> [ 5298.908650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967263
>> [ 5298.908710] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967264
>> [ 5298.909051] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967265
>> [ 5413.856013] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667058856+8) 4294967266
>> [ 5446.671677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967265
>> [ 5446.671679] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967264
>> [ 5446.671680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967263
>> [ 5446.671681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967262
>> [ 5446.671682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967261
>> [ 5446.671683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967260
>> [ 5446.671684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967259
>> [ 5479.015532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967260
>> [ 5479.015632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967261
>> [ 5479.015683] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967262
>> [ 5479.015735] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967263
>> [ 5492.731793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967264
>> [ 5492.731911] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967265
>> [ 5506.007986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967264
>> [ 5506.007988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967263
>> [ 5506.007991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967262
>> [ 5506.007994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967261
>> [ 5506.007997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967260
>> [ 5506.008000] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967259
>> [ 5506.011738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967260
>> [ 5506.011854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967261
>> [ 5506.011861] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967262
>> [ 5506.012133] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967263
>> [ 5506.012143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967264
>> [ 5555.890832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967263
>> [ 5555.900322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967262
>> [ 5555.909852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967261
>> [ 5555.919336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967260
>> [ 5555.928823] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967259
>> [ 5574.002280] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967260
>> [ 5574.002313] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967261
>> [ 5574.002403] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967262
>> [ 5574.002468] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967263
>> [ 5574.002561] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967264
>> [ 5574.002645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967265
>> [ 5606.796975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967264
>> [ 5606.796977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967263
>> [ 5606.796978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967262
>> [ 5606.796979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967261
>> [ 5606.796981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967260
>> [ 5606.796982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967259
>> [ 5606.798208] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967260
>> [ 5606.798527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967261
>> [ 5606.798585] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967262
>> [ 5606.798607] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967263
>> [ 5606.803857] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967264
>> [ 5606.804282] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967265
>> [ 5639.962722] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967266
>> [ 5652.645345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967265
>> [ 5652.654833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967264
>> [ 5652.664323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967263
>> [ 5652.673815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967262
>> [ 5652.683294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967261
>> [ 5652.692781] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967260
>> [ 5654.603101] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967259
>> [ 5654.613230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967260
>> [ 5654.613572] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967261
>> [ 5654.613687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967262
>> [ 5654.613814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967263
>> [ 5654.614055] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967264
>> [ 5683.045381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967263
>> [ 5683.045383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967262
>> [ 5683.045385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967261
>> [ 5683.045387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967260
>> [ 5683.045388] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967259
>> [ 5683.048586] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967260
>> [ 5683.048965] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967261
>> [ 5683.049073] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967262
>> [ 5683.049140] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967263
>> [ 5683.049162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967264
>> [ 5683.049196] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967265
>> [ 5683.049256] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967266
>> [ 5683.474474] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967267
>> [ 5723.855633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967266
>> [ 5724.027114] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967265
>> [ 5724.198621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967264
>> [ 5724.370117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967263
>> [ 5724.541614] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967262
>> [ 5724.713065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967261
>> [ 5724.884518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967260
>> [ 5725.055989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967259
>> [ 5730.407790] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967260
>> [ 5730.407821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967261
>> [ 5730.407937] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967263
>> [ 5730.407896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967262
>> [ 5730.408159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967265
>> [ 5730.408143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967264
>> [ 5730.408353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967266
>> [ 5758.122868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967260
>> [ 5758.123578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967261
>> [ 5758.123627] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967262
>> [ 5758.130240] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967263
>> [ 5758.130420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967264
>> [ 5758.130534] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967265
>> [ 5846.663594] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967264
>> [ 5846.673081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967263
>> [ 5846.682568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967262
>> [ 5846.692061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967261
>> [ 5846.701549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967260
>> [ 5846.711038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967259
>> [ 5911.098887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967265
>> [ 5911.108378] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967264
>> [ 5911.117873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967263
>> [ 5911.127372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967262
>> [ 5911.136857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967261
>> [ 5911.146341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967260
>> [ 5911.155824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967259
>> [ 5928.422955] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967260
>> [ 5928.422960] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967261
>> [ 5928.422966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967262
>> [ 5928.422974] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967264
>> [ 5928.422972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967263
>> [ 6014.681387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967265
>> [ 6014.690871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967264
>> [ 6014.700360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967263
>> [ 6014.709859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967262
>> [ 6014.719355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967261
>> [ 6014.728844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967260
>> [ 6014.738332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967259
>> [ 6056.379708] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967265
>> [ 6056.389200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967264
>> [ 6056.398687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967263
>> [ 6056.408178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967262
>> [ 6056.417667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967261
>> [ 6056.427155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967260
>> [ 6056.436640] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967259
>> [ 6063.789506] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967260
>> [ 6063.789738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967261
>> [ 6063.789989] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967262
>> [ 6063.790034] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967263
>> [ 6063.790190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967264
>> [ 6063.790287] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967265
>> [ 6078.230963] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967266
>> [ 6105.353125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967265
>> [ 6105.362612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967264
>> [ 6105.372093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967263
>> [ 6105.381577] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967262
>> [ 6105.391064] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967261
>> [ 6105.400555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967260
>> [ 6105.410041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967259
>> [ 6128.587354] __add_stripe_bio: md127: start ff2721beec8c2fa0(27920223080+8) 4294967260
>> [ 6201.907683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27920223080+8) 4294967259
>> [ 6210.113728] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032744680+8) 4294967259
>> [ 6283.753223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967264
>> [ 6283.762710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967263
>> [ 6283.772194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967262
>> [ 6283.781677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967261
>> [ 6283.791163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967260
>> [ 6283.800647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967259
>> [ 6294.185205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967260
>> [ 6294.185349] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967261
>> [ 6354.564956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967266
>> [ 6354.565008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967265
>> [ 6354.565050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967264
>> [ 6354.565103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967263
>> [ 6354.565143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967262
>> [ 6354.565198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967261
>> [ 6354.565250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967260
>> [ 6354.565295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967259
>> [ 6354.571582] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967260
>> [ 6354.571613] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967261
>> [ 6354.571614] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967262
>> [ 6354.572095] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967263
>> [ 6381.572101] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967264
>> [ 6381.572150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967265
>> [ 6381.572462] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967266
>> [ 6417.668789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967265
>> [ 6417.678285] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967264
>> [ 6417.687773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967263
>> [ 6417.697266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967262
>> [ 6417.706822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967261
>> [ 6417.716318] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967260
>> [ 6417.725807] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967259
>> [ 6442.242691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967260
>> [ 6442.242776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967261
>> [ 6442.242901] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967262
>> [ 6442.242998] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967263
>> [ 6442.243060] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967264
>> [ 6442.243109] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967265
>> [ 6487.368252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967264
>> [ 6487.368256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967263
>> [ 6487.384984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967262
>> [ 6487.401709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967261
>> [ 6487.418441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967260
>> [ 6487.418447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967259
>> [ 6512.350543] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967260
>> [ 6512.351290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967261
>> [ 6512.351395] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967262
>> [ 6512.351419] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967263
>> [ 6512.351565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967264
>> [ 6512.351578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967265
>> [ 6512.351611] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967266
>> [ 6558.339111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967265
>> [ 6558.339113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967264
>> [ 6558.339115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967263
>> [ 6558.339118] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967262
>> [ 6558.339120] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967261
>> [ 6558.339122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967260
>> [ 6558.339123] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967259
>> [ 6604.012612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967265
>> [ 6604.012615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967264
>> [ 6604.012617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967263
>> [ 6604.012619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967262
>> [ 6604.012622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967261
>> [ 6604.012624] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967260
>> [ 6604.012626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967259
>> [ 6636.116612] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967260
>> [ 6636.117018] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967261
>> [ 6636.117064] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967262
>> [ 6636.117191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967263
>> [ 6636.117217] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967265
>> [ 6636.117204] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967264
>> [ 6636.117365] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967266
>> [ 6697.656762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967265
>> [ 6697.666250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967264
>> [ 6697.675731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967263
>> [ 6697.685213] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967262
>> [ 6697.694703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967261
>> [ 6697.704188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967260
>> [ 6697.713685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967259
>> [ 6699.748818] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967260
>> [ 6699.749045] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967261
>> [ 6699.749350] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967262
>> [ 6699.749488] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967264
>> [ 6699.749487] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967263
>> [ 6699.749673] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967265
>> [ 6700.169570] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967266
>> [ 6714.982644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967264
>> [ 6714.982749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967265
>> [ 6752.225916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967264
>> [ 6752.235410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967263
>> [ 6752.244901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967262
>> [ 6752.254387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967261
>> [ 6752.263875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967260
>> [ 6752.273361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967259
>> [ 6763.509990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967260
>> [ 6763.510135] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967261
>> [ 6763.510150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967262
>> [ 6763.510183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967263
>> [ 6763.510242] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967264
>> [ 6763.510270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967265
>> [ 6763.512906] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967266
>> [ 6823.022727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967265
>> [ 6823.022730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967264
>> [ 6823.022731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967263
>> [ 6823.022734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967262
>> [ 6823.022735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967261
>> [ 6823.022736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967260
>> [ 6823.022738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967259
>> [ 6823.024701] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967260
>> [ 6823.024824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967261
>> [ 6823.025069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967263
>> [ 6823.024976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967262
>> [ 6823.025323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967264
>> [ 6823.025427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967265
>> [ 6929.234367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967264
>> [ 6929.243863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967263
>> [ 6929.253358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967262
>> [ 6929.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967261
>> [ 6929.272333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967260
>> [ 6929.281822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967259
>> [ 6930.403685] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967260
>> [ 6930.403904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967261
>> [ 6930.404088] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967262
>> [ 6930.404223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967263
>> [ 6930.404286] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967264
>> [ 6930.404292] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967265
>> [ 6994.814514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967264
>> [ 6994.824001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967263
>> [ 6994.833494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967262
>> [ 6994.842983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967261
>> [ 6994.852473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967260
>> [ 6994.861960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967259
>> [ 6997.854357] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935572712+8) 4294967265
>> [ 7031.426286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967264
>> [ 7031.435774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967263
>> [ 7031.452511] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967262
>> [ 7031.468341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967261
>> [ 7031.484182] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967260
>> [ 7039.434351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967259
>> [ 7045.236931] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967260
>> [ 7045.237482] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967261
>> [ 7045.237696] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967262
>> [ 7045.237743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967263
>> [ 7056.937353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967264
>> [ 7056.937578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967265
>> [ 7056.940551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967264
>> [ 7056.940553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967263
>> [ 7056.940554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967262
>> [ 7056.940555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967261
>> [ 7056.940556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967260
>> [ 7056.940557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967259
>> [ 7083.864814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967260
>> [ 7083.865053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967262
>> [ 7083.865036] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967261
>> [ 7083.865102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967263
>> [ 7083.865159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967264
>> [ 7083.964009] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967265
>> [ 7095.497485] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967266
>> [ 7155.158072] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967265
>> [ 7155.158073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967264
>> [ 7155.158074] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967263
>> [ 7155.158076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967262
>> [ 7155.158077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967261
>> [ 7155.158078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967260
>> [ 7155.158079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967259
>> [ 7155.165525] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967260
>> [ 7180.285881] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967261
>> [ 7183.167275] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967262
>> [ 7183.414146] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967263
>> [ 7224.653276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967265
>> [ 7224.662765] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967264
>> [ 7224.672249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967263
>> [ 7224.681736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967262
>> [ 7224.691228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967261
>> [ 7224.700720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967260
>> [ 7224.710207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967259
>> [ 7229.399854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967260
>> [ 7229.399922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967261
>> [ 7229.400041] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967262
>> [ 7229.400099] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967263
>> [ 7229.400157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967264
>> [ 7229.400221] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967265
>> [ 7288.006416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967260
>> [ 7288.006417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967261
>> [ 7288.006420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967262
>> [ 7288.006422] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967263
>> [ 7288.006605] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967264
>> [ 7288.006752] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967265
>> [ 7288.006975] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967266
>> [ 7353.182856] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967261
>> [ 7353.182854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967260
>> [ 7353.182949] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967262
>> [ 7353.183001] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967263
>> [ 7353.183401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967264
>> [ 7353.183737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967266
>> [ 7353.183726] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967265
>> [ 7353.184047] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967267
>> [ 7443.628841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967264
>> [ 7443.638347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967263
>> [ 7443.647826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967262
>> [ 7443.657311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967261
>> [ 7443.666797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967260
>> [ 7443.676282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967259
>> [ 7501.172830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967263
>> [ 7501.182322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967262
>> [ 7501.191809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967261
>> [ 7501.201294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967260
>> [ 7501.210778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967259
>> [ 7508.208830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967260
>> [ 7508.209597] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967261
>> [ 7508.209670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967262
>> [ 7522.177756] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967263
>> [ 7522.177879] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967264
>> [ 7522.177881] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967265
>> [ 7550.776037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967264
>> [ 7550.785525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967263
>> [ 7550.795016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967262
>> [ 7550.804501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967261
>> [ 7550.813985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967260
>> [ 7550.823470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967259
>> [ 7556.140566] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967260
>> [ 7556.140598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967261
>> [ 7556.140739] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967262
>> [ 7556.140798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967263
>> [ 7556.140931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967264
>> [ 7556.141063] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967265
>> [ 7556.141111] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967266
>> [ 7556.141212] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967267
>> [ 7589.706135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967266
>> [ 7589.991286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967265
>> [ 7590.277340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967264
>> [ 7590.563347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967263
>> [ 7590.849389] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967262
>> [ 7591.135445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967261
>> [ 7591.421473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967260
>> [ 7591.707517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967259
>> [ 7606.172838] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204191720+8) 4294967260
>> [ 7703.615017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967263
>> [ 7703.624510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967262
>> [ 7703.634001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967261
>> [ 7703.643491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967260
>> [ 7703.652977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967259
>> [ 7708.933190] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967260
>> [ 7708.933333] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967261
>> [ 7708.933473] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967262
>> [ 7708.933618] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967263
>> [ 7708.933620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967264
>> [ 7708.933657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967265
>> [ 7708.933663] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967266
>> [ 7758.303406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967265
>> [ 7758.303407] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967264
>> [ 7758.303408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967263
>> [ 7758.303410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967262
>> [ 7758.303411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967261
>> [ 7758.303412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967260
>> [ 7758.303413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967259
>> [ 7778.439143] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967260
>> [ 7778.439197] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967261
>> [ 7778.439279] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967262
>> [ 7778.439376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967263
>> [ 7778.439409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967264
>> [ 7778.439494] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967265
>> [ 7858.899558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967264
>> [ 7858.899559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967263
>> [ 7858.899561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967262
>> [ 7858.899562] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967261
>> [ 7858.899563] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967260
>> [ 7858.899564] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967259
>> [ 7890.583124] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967260
>> [ 7890.583147] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967261
>> [ 7890.583594] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967262
>> [ 7890.583650] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967263
>> [ 7890.584141] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967264
>> [ 7890.584215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967265
>> [ 7890.584351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967266
>> [ 7952.730165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967265
>> [ 7952.739650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967264
>> [ 7952.749137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967263
>> [ 7952.758627] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967262
>> [ 7952.768110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967261
>> [ 7952.777595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967260
>> [ 7952.787077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967259
>> [ 7966.676635] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967260
>> [ 7966.676647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967261
>> [ 7966.676670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967263
>> [ 7966.676658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967262
>> [ 7966.676686] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967264
>> [ 7966.677634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967265
>> [ 7966.708979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967266
>> [ 8032.243861] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967265
>> [ 8032.253352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967264
>> [ 8032.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967263
>> [ 8032.272336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967262
>> [ 8032.281826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967261
>> [ 8032.291317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967260
>> [ 8032.300800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967259
>> [ 8043.901514] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967260
>> [ 8043.901516] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967261
>> [ 8043.901563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967262
>> [ 8043.901612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967264
>> [ 8043.901609] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967263
>> [ 8043.907672] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967265
>> [ 8146.468217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967264
>> [ 8146.477717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967263
>> [ 8146.487204] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967262
>> [ 8146.496692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967261
>> [ 8146.506180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967260
>> [ 8146.515671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967259
>> [ 8151.492003] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967260
>> [ 8151.492121] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967261
>> [ 8151.492457] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967262
>> [ 8151.492601] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967264
>> [ 8151.492590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967263
>> [ 8151.492750] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967265
>> [ 8164.821795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967266
>> [ 8200.519377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967260
>> [ 8200.519505] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967261
>> [ 8200.519782] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967262
>> [ 8200.519805] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967263
>> [ 8200.520020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967264
>> [ 8200.520247] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967265
>> [ 8200.520434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967266
>> [ 8200.520558] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967267
>> [ 8231.475052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967266
>> [ 8231.637009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967265
>> [ 8231.799001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967264
>> [ 8231.960972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967263
>> [ 8232.122948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967262
>> [ 8232.284905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967261
>> [ 8232.446896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967260
>> [ 8232.608893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967259
>> [ 8251.913368] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967260
>> [ 8251.913382] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967263
>> [ 8251.913377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967261
>> [ 8251.913388] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967265
>> [ 8251.913387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967264
>> [ 8251.913379] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967262
>> [ 8302.630262] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967261
>> [ 8302.630318] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967263
>> [ 8302.630275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967262
>> [ 8302.630250] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967260
>> [ 8302.630745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967264
>> [ 8377.488095] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967263
>> [ 8377.497581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967262
>> [ 8377.507068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967261
>> [ 8377.516554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967260
>> [ 8377.526049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967259
>> [ 8382.868830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967260
>> [ 8382.868911] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967261
>> [ 8382.869109] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967262
>> [ 8382.869323] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967263
>> [ 8382.983582] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967264
>> [ 8407.895450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967263
>> [ 8407.895452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967262
>> [ 8407.895455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967261
>> [ 8407.895457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967260
>> [ 8407.895459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967259
>> [ 8454.553018] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967264
>> [ 8454.570223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967263
>> [ 8454.587423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967262
>> [ 8454.587426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967261
>> [ 8454.604638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967260
>> [ 8454.621845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967259
>> [ 8498.349228] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967261
>> [ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967262
>> [ 8498.349217] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967260
>> [ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967263
>> [ 8498.349317] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967264
>> [ 8498.349366] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967265
>> [ 8517.904808] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967266
>> [ 8551.340824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967265
>> [ 8551.350319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967264
>> [ 8551.359809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967263
>> [ 8551.369300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967262
>> [ 8551.378788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967261
>> [ 8551.388272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967260
>> [ 8551.397756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967259
>> [ 8599.114364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967265
>> [ 8599.114366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967264
>> [ 8599.114367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967263
>> [ 8599.114368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967262
>> [ 8599.114370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967261
>> [ 8599.114371] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967260
>> [ 8599.114372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967259
>> [ 8599.117759] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
>> [ 8623.906310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
>> [ 8623.909333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967260
>> [ 8623.909335] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967259
>> [ 8623.909624] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
>> [ 8623.910846] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
>> [ 8623.913364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967262
>> [ 8625.066563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967263
>> [ 8651.338552] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204280104+8) 4294967267
>> [ 8693.930170] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967266
>> [ 8693.939647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967265
>> [ 8693.949139] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967264
>> [ 8693.958637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967263
>> [ 8693.968135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967262
>> [ 8693.977622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967261
>> [ 8693.987106] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967260
>> [ 8693.996589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967259
>> [ 8703.470915] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967260
>> [ 8703.470931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967261
>> [ 8703.470985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967264
>> [ 8703.470977] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967263
>> [ 8703.470957] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967262
>> [ 8703.471000] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967265
>> [ 8703.471037] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967266
>> [ 8771.557361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967265
>> [ 8771.566858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967264
>> [ 8771.576344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967263
>> [ 8771.585830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967262
>> [ 8771.595316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967261
>> [ 8771.604797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967260
>> [ 8771.614273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967259
>> [ 8772.929338] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967260
>> [ 8772.929454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967261
>> [ 8772.929545] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967262
>> [ 8772.929764] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967263
>> [ 8772.929816] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967264
>> [ 8772.929850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967265
>> [ 8847.837534] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967263
>> [ 8847.837548] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967264
>> [ 8847.843801] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967265
>> [ 8945.958057] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967260
>> [ 8945.958072] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967261
>> [ 8945.958101] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967262
>> [ 8945.958105] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967263
>> [ 8945.958112] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967264
>> [ 8945.958137] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967265
>> [ 8991.941073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967264
>> [ 8991.941075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967263
>> [ 8991.941076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967262
>> [ 8991.941077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967261
>> [ 8991.941078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967260
>> [ 8991.941080] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967259
>> [ 9036.005328] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967260
>> [ 9036.005409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967261
>> [ 9036.006275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967262
>> [ 9036.006348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967263
>> [ 9036.006436] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967264
>> [ 9036.006517] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967265
>> [ 9089.090686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967264
>> [ 9089.100181] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967263
>> [ 9089.109670] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967262
>> [ 9089.119157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967261
>> [ 9089.128642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967260
>> [ 9089.138130] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967259
>> [ 9120.298531] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967266
>> [ 9120.298540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967265
>> [ 9120.298547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967264
>> [ 9120.298555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967263
>> [ 9120.298568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967262
>> [ 9120.298581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967261
>> [ 9120.298599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967260
>> [ 9120.298613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967259
>> [ 9120.440293] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967260
>> [ 9120.440348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967261
>> [ 9120.440387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967262
>> [ 9120.440528] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967263
>> [ 9120.440553] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967264
>> [ 9120.440625] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967265
>> [ 9157.076832] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967266
>> [ 9204.360610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967265
>> [ 9204.370096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967264
>> [ 9204.379585] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967263
>> [ 9204.389070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967262
>> [ 9204.398557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967261
>> [ 9204.408047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967260
>> [ 9204.417534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967259
>> [ 9235.036854] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967260
>> [ 9235.036941] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967261
>> [ 9235.036985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967262
>> [ 9235.037076] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967263
>> [ 9235.037103] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967264
>> [ 9235.037202] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967265
>> [ 9326.601083] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967260
>> [ 9326.601219] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967261
>> [ 9326.601490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967262
>> [ 9326.601519] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967263
>> [ 9326.601556] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967264
>> [ 9326.601644] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967265
>> [ 9326.601736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967266
>> [ 9326.607310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967267
>> [ 9358.046046] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967266
>> [ 9358.055533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967265
>> [ 9358.065019] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967264
>> [ 9358.074517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967263
>> [ 9358.084002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967262
>> [ 9358.093495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967261
>> [ 9358.102997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967260
>> [ 9358.112485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967259
>> [ 9361.800927] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967260
>> [ 9361.801104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967261
>> [ 9361.801282] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967262
>> [ 9361.801484] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967263
>> [ 9361.801527] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967264
>> [ 9361.801620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967265
>> [ 9361.801653] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967266
>> [ 9430.438322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967265
>> [ 9430.447809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967264
>> [ 9430.457294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967263
>> [ 9430.466782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967262
>> [ 9430.476277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967261
>> [ 9430.485766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967260
>> [ 9430.495250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967259
>> [ 9444.781947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967260
>> [ 9444.781963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967261
>> [ 9444.782418] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967262
>> [ 9444.782605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967263
>> [ 9444.782662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967264
>> [ 9444.782718] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967265
>> [ 9444.782857] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967266
>> [ 9444.782963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967267
>> [ 9460.949714] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967266
>> [ 9461.002252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967265
>> [ 9461.054806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967264
>> [ 9461.107370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967263
>> [ 9461.159913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967262
>> [ 9461.212473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967261
>> [ 9461.265014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967260
>> [ 9461.317558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967259
>> [ 9472.880989] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967260
>> [ 9472.881022] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967262
>> [ 9472.881013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967261
>> [ 9472.881466] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967263
>> [ 9472.881585] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967264
>> [ 9472.881617] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967265
>> [ 9472.881636] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967266
>> [ 9473.230016] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967267
>> [ 9484.525992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967266
>> [ 9484.607866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967265
>> [ 9484.690643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967264
>> [ 9484.773393] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967263
>> [ 9484.856144] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967262
>> [ 9484.938897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967261
>> [ 9485.021665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967260
>> [ 9485.104419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967259
>> [ 9565.878800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967265
>> [ 9565.888283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967264
>> [ 9565.897767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967263
>> [ 9565.907250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967262
>> [ 9565.916734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967261
>> [ 9565.926219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967260
>> [ 9565.935703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967259
>> [ 9569.109038] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032745832+8) 4294967260
>> [ 9613.943060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032745832+8) 4294967259
>> [ 9625.083909] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967264
>> [ 9625.083911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967263
>> [ 9625.083913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967262
>> [ 9625.083914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967261
>> [ 9625.083916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967260
>> [ 9625.083917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967259
>> [ 9686.963155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8599505896+8) 4294967259
>> [ 9706.444605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967260
>> [ 9706.444608] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967261
>> [ 9706.444612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967262
>> [ 9706.444615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967263
>> [ 9706.444671] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967264
>> [ 9706.462013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967265
>> [ 9709.460551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967266
>> [ 9713.748452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967266
>> [ 9713.748682] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967267
>> [ 9713.753732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967266
>> [ 9713.753735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967265
>> [ 9713.753736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967264
>> [ 9713.753737] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967263
>> [ 9713.753739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967262
>> [ 9713.753740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967261
>> [ 9713.753741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967260
>> [ 9713.753743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967259
>> [ 9742.956450] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967260
>> [ 9742.956502] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967261
>> [ 9742.956503] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967262
>> [ 9742.956551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967263
>> [ 9743.152042] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967264
>> [ 9756.828598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967265
>> [ 9811.500567] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967264
>> [ 9811.510052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967263
>> [ 9811.519534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967262
>> [ 9811.529017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967261
>> [ 9811.538503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967260
>> [ 9811.547984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967259
>> [ 9816.207521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967266
>> [ 9816.207576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967265
>> [ 9816.207628] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967264
>> [ 9816.207682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967263
>> [ 9816.207747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967262
>> [ 9816.207801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967261
>> [ 9816.207859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967260
>> [ 9816.207914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967259
>> [ 9816.211655] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967260
>> [ 9816.211658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967261
>> [ 9816.211787] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967262
>> [ 9816.211882] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967263
>> [ 9816.211901] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967264
>> [ 9816.211947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967265
>> [ 9919.228920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967264
>> [ 9919.238395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967263
>> [ 9919.247877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967262
>> [ 9919.257360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967261
>> [ 9919.266845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967260
>> [ 9919.276332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967259
>> [ 9921.581043] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967261
>> [ 9921.580965] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967260
>> [ 9921.581099] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967262
>> [ 9921.581185] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967263
>> [ 9921.581304] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967264
>> [ 9921.581341] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967265
>> [ 9921.581364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967266
>> [ 9921.581365] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967267
>> [ 9921.583872] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967266
>> [ 9921.583882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967265
>> [ 9921.583897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967264
>> [ 9921.583913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967263
>> [ 9921.583929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967262
>> [ 9921.583945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967261
>> [ 9921.583960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967260
>> [ 9921.583977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967259
>> [ 9951.448268] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314088+8) 4294967260
>> [ 9986.682431] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967262
>> [ 9986.682540] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967263
>> [ 9986.682661] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967264
>> [ 9986.687019] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967265
>> [10026.057977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967264
>> [10026.057980] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967263
>> [10026.057982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967262
>> [10026.057984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967261
>> [10026.057986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967260
>> [10026.057987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967259
>> [10026.060967] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967260
>> [10026.061269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967261
>> [10026.061642] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967262
>> [10026.061728] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967264
>> [10026.061715] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967263
>> [10026.061745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967265
>> [10026.061790] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967266
>> [10026.061809] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967267
>> [10055.918459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967266
>> [10056.153717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967265
>> [10056.388985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967264
>> [10056.624244] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967263
>> [10056.859507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967262
>> [10057.094768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967261
>> [10057.330043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967260
>> [10057.565339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967259
>> [10060.361434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967260
>> [10060.361487] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967261
>> [10060.361634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967262
>> [10060.361709] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967263
>> [10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967264
>> [10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967265
>> [10060.361723] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967266
>> [10108.302805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967265
>> [10108.302808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967264
>> [10108.302810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967263
>> [10108.302812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967262
>> [10108.302814] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967261
>> [10108.302816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967260
>> [10108.302818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967259
>> [10108.307331] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967260
>> [10108.307515] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967261
>> [10108.307665] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967263
>> [10108.307647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967262
>> [10108.308273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967264
>> [10108.308454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967266
>> [10108.308426] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967265
>> [10108.308720] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967267
>> [10138.300402] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967266
>> [10138.516650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967265
>> [10138.732928] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967264
>> [10138.949166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967263
>> [10139.165431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967262
>> [10139.381678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967261
>> [10139.597964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967260
>> [10139.814237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967259
>> [10144.088915] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746728+8) 4294967260
>> [10188.320954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746728+8) 4294967259
>> [10196.658979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967260
>> [10196.659207] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967261
>> [10196.659343] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967262
>> [10196.659430] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967263
>> [10196.659439] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967264
>> [10196.660163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967265
>> [10196.660183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967266
>> [10279.470741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967265
>> [10279.480231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967264
>> [10279.489723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967263
>> [10279.499214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967262
>> [10279.508709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967261
>> [10279.518206] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967260
>> [10279.527693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967259
>> [10286.896922] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746984+8) 4294967260
>> [10304.464308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746984+8) 4294967259
>> [10305.667186] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967260
>> [10305.667269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967261
>> [10305.667270] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967262
>> [10305.667429] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967263
>> [10305.667512] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967264
>> [10305.667590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967265
>> [10305.667639] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967266
>> [10334.751820] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
>> [10415.904902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
>> [10415.960311] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
>> [10423.280595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
>> [10439.777461] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967263
>> [10439.777592] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967264
>> [10439.777647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967265
>> [10439.777786] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967266
>> [10497.698903] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967265
>> [10497.698905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967264
>> [10497.698907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967263
>> [10497.698908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967262
>> [10497.698910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967261
>> [10497.698911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967260
>> [10497.698912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967259
>> [10497.701113] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967260
>> [10497.701118] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967261
>> [10497.701183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967262
>> [10497.701490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967263
>> [10497.701908] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967264
>> [10497.702132] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967265
>> [10593.723273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967264
>> [10593.723280] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967265
>> [10593.723381] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967266
>> [10681.179411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967265
>> [10681.188893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967264
>> [10681.198368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967263
>> [10681.207851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967262
>> [10681.217350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967261
>> [10681.226842] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967260
>> [10681.236340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967259
>> [10689.151359] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967260
>> [10689.151633] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967261
>> [10689.151649] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967262
>> [10689.151700] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967263
>> [10689.151823] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967264
>> [10689.152267] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967265
>> [10689.152370] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967266
>> [10765.465443] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967260
>> [10765.465539] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967261
>> [10765.465942] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967262
>> [10765.466073] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967263
>> [10765.471339] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967264
>> [10765.471352] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967265
>> [10765.471615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967266
>> [10810.806942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967265
>> [10810.816430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967264
>> [10810.825920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967263
>> [10810.835410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967262
>> [10810.844892] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967261
>> [10810.854379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967260
>> [10810.863866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967259
>> [10829.079850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741313832+8) 4294967260
>> [10849.794350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
>> [10918.134642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967264
>> [10918.144122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967263
>> [10918.153605] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967262
>> [10918.163087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967261
>> [10918.172569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967260
>> [10918.182053] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
>> [10925.672507] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967260
>> [10925.672673] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967262
>> [10925.672533] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967261
>> [10925.672788] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967263
>> [10925.672958] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967264
>> [10925.672978] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967265
>> [10925.673524] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967266
>> [10925.673721] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967267
>> [10938.064956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967266
>> [10938.263167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967265
>> [10938.461362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967264
>> [10938.659537] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967263
>> [10938.857718] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967262
>> [10939.055907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967261
>> [10939.254110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967260
>> [10939.452289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967259
>> [10942.625740] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967260
>> [10942.625791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967262
>> [10942.625850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967264
>> [10942.625849] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967263
>> [10942.625789] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967261
>> [10942.626403] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967265
>> [11020.726643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967264
>> [11020.726645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967263
>> [11020.726646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967262
>> [11020.726648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967261
>> [11020.726649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967260
>> [11020.726651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967259
>> [11045.762697] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967262
>> [11045.763802] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967263
>> [11045.763966] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967264
>> [11045.764660] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967265
>> [11072.427987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967266
>> [11072.429693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967265
>> [11072.429971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967264
>> [11072.430020] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967263
>> [11072.430076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967262
>> [11072.430127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967261
>> [11072.430180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967260
>> [11072.430234] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967259
>> [11103.176662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967260
>> [11103.176760] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967261
>> [11103.176914] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967262
>> [11103.176947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967263
>> [11103.177351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967264
>> [11103.243210] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967265
>> [11197.368568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967264
>> [11197.378067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967263
>> [11197.387571] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967262
>> [11197.397061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967261
>> [11197.406551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967260
>> [11197.416027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967259
>> [11213.302960] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741352808+8) 4294967260
>> [11214.005987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967265
>> [11214.005989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967264
>> [11214.005990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967263
>> [11214.005991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967262
>> [11214.005992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967261
>> [11214.005994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967260
>> [11214.005995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967259
>> [11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967260
>> [11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967261
>> [11251.017233] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967263
>> [11251.017238] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967266
>> [11251.017236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967265
>> [11251.017234] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967264
>> [11251.017231] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967262
>> [11269.980634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967267
>> [11289.588526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967266
>> [11289.598008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967265
>> [11289.607492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967264
>> [11289.616977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967263
>> [11289.626462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967262
>> [11289.635951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967261
>> [11289.645439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967260
>> [11289.654929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967259
>> [11289.955406] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967260
>> [11289.955748] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967261
>> [11289.955828] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967262
>> [11289.956014] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967263
>> [11310.687017] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967264
>> [11344.753308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967266
>> [11344.753344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967265
>> [11344.753381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967264
>> [11344.753417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967263
>> [11344.753453] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967262
>> [11344.753488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967261
>> [11344.753524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967260
>> [11344.753559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967259
>> [11372.057310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967260
>> [11372.057453] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967261
>> [11372.058126] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967262
>> [11372.058225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967263
>> [11372.058236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967264
>> [11372.058445] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967265
>> [11477.692568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967264
>> [11477.702057] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967263
>> [11477.711549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967262
>> [11477.721041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967261
>> [11477.730527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967260
>> [11477.740015] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967259
>> [11484.738964] __add_stripe_bio: md127: start ff2721beec8c2fa0(28725142504+8) 4294967260
>> [11516.590602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28725142504+8) 4294967259
>> [11541.514580] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967260
>> [11541.514657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967261
>> [11541.514736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967262
>> [11541.514795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967263
>> [11541.514937] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967264
>> [11541.514959] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967265
>> [11541.515020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967266
>> [11541.515178] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967267
>> [11589.245255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967266
>> [11589.530527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967265
>> [11589.815730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967264
>> [11590.100882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967263
>> [11590.386071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967262
>> [11590.671228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967261
>> [11590.956368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967260
>> [11591.241504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967259
>> [11665.205805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967264
>> [11665.215300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967263
>> [11665.224797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967262
>> [11665.234283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967261
>> [11665.243770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967260
>> [11665.253265] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967259
>> [11676.491376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967260
>> [11676.491588] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967261
>> [11676.491637] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967262
>> [11676.491798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967263
>> [11676.492010] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967264
>> [11785.703215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967260
>> [11787.558791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967261
>> [11787.558892] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967262
>> [11791.614199] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967263
>> [11793.388452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967264
>> [11795.421600] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967265
>> [11795.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967266
>> [11795.424300] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967267
>> [11795.426502] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967266
>> [11795.426503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967265
>> [11795.426504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967264
>> [11795.426505] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967263
>> [11795.426506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967262
>> [11795.426507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967261
>> [11795.426509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967260
>> [11795.426510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967259
>> [11871.476819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967260
>> [11871.486321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967259
>> [11919.615037] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967260
>> [11919.615114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967261
>> [11919.615374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967262
>> [11919.615395] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967263
>> [11919.615491] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967264
>> [11928.774840] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967267
>> [12000.001506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967264
>> [12000.001507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967263
>> [12000.001509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967262
>> [12000.001510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967261
>> [12000.001512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967260
>> [12000.001514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967259
>> [12033.086368] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967260
>> [12033.086440] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967261
>> [12033.086702] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967262
>> [12033.086790] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967263
>> [12033.087023] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967264
>> [12071.801701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967262
>> [12071.801733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967264
>> [12071.801727] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967263
>> [12071.801859] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967265
>> [12071.801934] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967266
>> [12147.838099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967265
>> [12147.838104] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967264
>> [12147.838108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967263
>> [12147.838113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967262
>> [12147.838117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967261
>> [12147.838122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967260
>> [12147.838131] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967259
>> [12161.825218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967260
>> [12171.278213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967261
>> [12171.278308] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967262
>> [12171.278349] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967263
>> [12171.278418] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967264
>> [12171.278481] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967265
>> [12225.694791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967264
>> [12225.704279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967263
>> [12225.713774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967262
>> [12225.723264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967261
>> [12225.732759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967260
>> [12225.742248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967259
>> [12241.217982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967260
>> [12241.218022] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967261
>> [12241.218156] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967262
>> [12241.218291] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967263
>> [12241.218712] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967264
>> [12241.225590] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967265
>> [12324.241931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967264
>> [12324.251421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967263
>> [12324.260905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967262
>> [12324.270390] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967261
>> [12324.279874] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967260
>> [12324.289362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967259
>> [12330.627283] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967261
>> [12330.627282] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967260
>> [12330.627356] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967262
>> [12330.627452] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967263
>> [12330.627459] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967264
>> [12330.627476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967265
>> [12360.643250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967265
>> [12360.643251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967264
>> [12360.643253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967263
>> [12360.643254] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967262
>> [12360.643256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967261
>> [12360.643257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967260
>> [12360.643258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967259
>> [12412.055085] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967262
>> [12412.055185] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967263
>> [12412.055346] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967264
>> [12412.055358] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967265
>> [12502.916463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967264
>> [12502.925955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967263
>> [12502.935437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967262
>> [12502.944921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967261
>> [12502.954403] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967260
>> [12502.963891] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967259
>> [12508.172163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28994103208+8) 4294967260
>> [12569.394168] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28994103208+8) 4294967259
>> [12669.806358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967264
>> [12669.815852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967263
>> [12669.825349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967262
>> [12669.834848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967261
>> [12669.844337] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967260
>> [12669.853824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967259
>> [12681.132403] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967260
>> [12681.132623] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967261
>> [12681.132903] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967262
>> [12681.133118] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967263
>> [12681.133360] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967264
>> [12681.133472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967265
>> [12681.133687] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967266
>> [12756.131024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967265
>> [12756.140520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967264
>> [12756.150010] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967263
>> [12756.159496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967262
>> [12756.168984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967261
>> [12756.178470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967260
>> [12756.187958] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967259
>> [12761.752380] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967260
>> [12761.752555] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967261
>> [12761.752566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967262
>> [12761.752717] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967263
>> [12761.752864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967264
>> [12761.753575] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967265
>> [12841.108192] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967264
>> [12841.117686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967263
>> [12841.127177] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967262
>> [12841.136667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967261
>> [12841.146150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967260
>> [12841.155642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967259
>> [12854.788520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967262
>> [12854.789006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967263
>> [12854.790480] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967264
>> [12854.792345] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967265
>> [12854.792371] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967266
>> [12854.792648] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967267
>> [12854.796137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967266
>> [12854.796140] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967265
>> [12854.796143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967264
>> [12854.796145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967263
>> [12854.796147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967262
>> [12854.796149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967261
>> [12854.796151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967260
>> [12854.796152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967259
>> [12979.496382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967265
>> [12979.505867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967264
>> [12979.515357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967263
>> [12979.524845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967262
>> [12979.534338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967261
>> [12979.543825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967260
>> [12979.553315] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967259
>> [12987.839356] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032747304+8) 4294967260
>> [13019.716541] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032747304+8) 4294967259
>> [13023.790667] __add_stripe_bio: md127: start ff2721beec8c2fa0(8053065000+8) 4294967260
>> [13166.159630] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8053065000+8) 4294967259
>> [13172.153701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967260
>> [13172.153856] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967261
>> [13172.154183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967262
>> [13172.154307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967263
>> [13172.154320] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967264
>> [13172.154325] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967265
>> [13172.154327] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967266
>> [13172.154540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967267
>> [13184.154929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967266
>> [13184.395309] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967265
>> [13184.635656] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967264
>> [13184.876002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967263
>> [13185.116361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967262
>> [13185.356722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967261
>> [13185.597099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967260
>> [13185.837445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967259
>> [13200.462739] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967260
>> [13200.463373] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967261
>> [13200.463433] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967262
>> [13200.463686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967263
>> [13200.463718] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967264
>> [13200.463750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967265
>> [13200.463801] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967266
>> [13200.463824] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967267
>> [13218.832166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967266
>> [13218.964853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967265
>> [13219.097514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967264
>> [13219.230197] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967263
>> [13219.362875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967262
>> [13219.495536] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967261
>> [13219.628221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967260
>> [13219.760852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967259
>> [13284.342646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967264
>> [13284.352133] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967263
>> [13284.361618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967262
>> [13284.371115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967261
>> [13284.380606] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967260
>> [13284.390092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967259
>> [13290.205839] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312350184+8) 4294967260
>> [13328.994695] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312350184+8) 4294967259
>> [13358.709396] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967261
>> [13358.709379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967260
>> [13358.709414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967262
>> [13358.709435] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967263
>> [13358.709475] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967264
>> [13362.737444] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967265
>> [13362.737666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967266
>> [13386.563843] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967260
>> [13386.563968] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967261
>> [13386.564092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967262
>> [13386.564227] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967263
>> [13386.564297] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967264
>> [13386.564364] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967265
>> [13386.564532] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967266
>> [13425.701678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967265
>> [13425.701680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967264
>> [13425.701681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967263
>> [13425.701682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967262
>> [13425.701684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967261
>> [13425.701686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967260
>> [13425.701688] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967259
>> [13491.009481] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967264
>> [13491.018974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967263
>> [13491.028463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967262
>> [13491.037945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967261
>> [13491.047430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967260
>> [13491.056918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967259
>> [13493.360976] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967260
>> [13493.361476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967261
>> [13493.361592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967262
>> [13493.366880] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967263
>> [13493.367183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967264
>> [13493.367399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967265
>> [13493.367635] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967266
>> [13493.367706] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967267
>> [13524.736988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967266
>> [13524.746476] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967265
>> [13524.755962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967264
>> [13524.765450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967263
>> [13524.774943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967262
>> [13524.784436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967261
>> [13524.793931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967260
>> [13524.803422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967259
>> [13529.444566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967260
>> [13529.444628] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967261
>> [13529.445249] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967262
>> [13529.445330] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967264
>> [13529.445307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967263
>> [13529.445578] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967265
>> [13529.445594] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967266
>> [13568.836756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967265
>> [13568.836757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967264
>> [13568.836759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967263
>> [13568.836766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967262
>> [13568.836768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967261
>> [13568.836769] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967260
>> [13568.836770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967259
>> [13595.486041] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967260
>> [13595.486114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967261
>> [13595.486450] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967262
>> [13595.486544] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967263
>> [13595.486756] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967264
>> [13595.486807] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967265
>> [13595.487127] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967266
>> [13684.444417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967265
>> [13684.453904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967264
>> [13684.463391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967263
>> [13684.472878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967262
>> [13684.482361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967261
>> [13684.491850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967260
>> [13684.501340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967259
>> [13686.643818] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967260
>> [13686.643853] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967261
>> [13686.643948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967262
>> [13686.643993] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967263
>> [13686.644144] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967264
>> [13686.644406] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967265
>> [13686.644525] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967266
>> [13734.793297] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967265
>> [13734.809137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967264
>> [13744.445102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967263
>> [13744.454593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967262
>> [13744.464083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967261
>> [13744.473560] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967260
>> [13744.483051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967259
>> [13820.332081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967264
>> [13820.341572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967263
>> [13820.351058] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967262
>> [13820.360550] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967261
>> [13820.370039] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967260
>> [13820.379525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967259
>> [13828.887574] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967260
>> [13828.888698] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967261
>> [13828.888811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967262
>> [13828.888838] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967263
>> [13828.888877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967264
>> [13828.889012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967265
>> [13828.889087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967266
>> [13939.263249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967265
>> [13939.272744] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967264
>> [13939.282233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967263
>> [13939.291721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967262
>> [13939.301203] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967261
>> [13939.310687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967260
>> [13939.320173] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967259
>> [13949.622362] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032748904+8) 4294967260
>> [13975.927477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324672360+8) 4294967267
>> [13975.932834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967266
>> [13975.932837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967265
>> [13975.932840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967264
>> [13975.932843] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967263
>> [13975.932845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967262
>> [13975.932848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967261
>> [13975.932851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967260
>> [13975.932853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967259
>> [14002.581980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967260
>> [14002.582096] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967261
>> [14002.582385] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967262
>> [14002.582558] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967263
>> [14002.582619] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967264
>> [14002.582667] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967265
>> [14119.130023] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967264
>> [14119.139513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967263
>> [14119.148996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967262
>> [14119.158486] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967261
>> [14119.167972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967260
>> [14119.177456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967259
>> [14124.072930] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967260
>> [14124.073027] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967262
>> [14124.073025] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967261
>> [14124.073126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967263
>> [14124.073129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967264
>> [14124.073379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967265
>> [14210.467642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967264
>> [14210.467644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967263
>> [14210.467645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967262
>> [14210.467647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967261
>> [14210.467650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967260
>> [14210.467652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967259
>> [14239.699953] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312349928+8) 4294967260
>> [14343.004904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312349928+8) 4294967259
>> [14351.607882] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967260
>> [14351.607891] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967262
>> [14351.607979] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967263
>> [14351.607884] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967261
>> [14351.608559] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967264
>> [14351.608811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967265
>> [14351.691948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967266
>> [14460.596492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967265
>> [14460.596493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967264
>> [14460.596494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967263
>> [14460.596495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967262
>> [14460.596496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967261
>> [14460.596497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967260
>> [14460.596499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967259
>> [14460.598694] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967260
>> [14460.598948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967261
>> [14460.599105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967262
>> [14460.599222] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967263
>> [14460.599261] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967264
>> [14460.599376] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967265
>> [14541.632593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967264
>> [14541.642093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967263
>> [14541.651581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967262
>> [14541.661069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967261
>> [14541.670554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967260
>> [14541.680042] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967259
>> [14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967260
>> [14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967261
>> [14547.547058] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967262
>> [14547.547167] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967263
>> [14547.547468] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967264
>> [14547.547552] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967265
>> [14547.547661] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967266
>> [14547.547697] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967267
>> [14575.315268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967266
>> [14575.596007] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967265
>> [14575.876716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967264
>> [14576.157450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967263
>> [14576.438196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967262
>> [14576.718943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967261
>> [14576.999682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967260
>> [14577.280400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967259
>> [14582.737304] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749480+8) 4294967260
>> [14636.539506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749480+8) 4294967259
>> [14638.880107] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749864+8) 4294967260
>> [14657.493830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749864+8) 4294967259
>> [14675.212921] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967260
>> [14675.213033] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967261
>> [14675.213091] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967262
>> [14675.213105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967263
>> [14675.213429] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967264
>> [14675.213477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967265
>> [14675.213877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967266
>> [14737.052888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967265
>> [14737.062372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967264
>> [14737.071858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967263
>> [14737.081341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967262
>> [14737.090827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967261
>> [14737.100311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967260
>> [14737.109810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967259
>> [14743.382147] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967260
>> [14743.382495] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967261
>> [14743.382520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967262
>> [14743.382595] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967263
>> [14743.382607] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967264
>> [14743.382646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967265
>> [14743.382683] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967266
>> [14836.115483] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967265
>> [14836.124971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967264
>> [14836.134457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967263
>> [14836.143943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967262
>> [14836.153427] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967261
>> [14836.162908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967260
>> [14836.172392] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967259
>> [14867.977410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967265
>> [14867.977411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967264
>> [14867.977413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967263
>> [14867.977414] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967262
>> [14867.977416] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967261
>> [14867.977418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967260
>> [14867.977419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967259
>> [14867.982046] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967260
>> [14867.982289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967261
>> [14867.982319] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967262
>> [14867.982377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967263
>> [14867.982398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967264
>> [14867.982409] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967265
>> [14906.532978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967264
>> [14906.532983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967263
>> [14906.532987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967262
>> [14906.532991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967261
>> [14906.532995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967260
>> [14906.533001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967259
>> [14906.537331] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967260
>> [14906.537374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967261
>> [14906.537389] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967262
>> [14906.537397] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967263
>> [14906.537413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967264
>> [14906.537417] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967265
>> [14906.538256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967266
>> [15000.553174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967265
>> [15000.562663] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967264
>> [15000.572145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967263
>> [15000.581619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967262
>> [15000.591109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967261
>> [15000.600591] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967260
>> [15000.610079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967259
>> [15038.840534] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967265
>> [15038.840569] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967266
>> [15038.918437] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967267
>> [15072.110477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967266
>> [15072.119963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967265
>> [15072.129445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967264
>> [15072.138931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967263
>> [15072.148422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967262
>> [15072.157919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967261
>> [15072.167410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967260
>> [15072.176895] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967259
>> [15094.337077] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967260
>> [15094.337106] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967262
>> [15094.337126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967264
>> [15094.337145] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967265
>> [15094.337125] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967263
>> [15094.337095] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967261
>> [15094.337202] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967266
>> [15094.337864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967267
>> [15109.185642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967266
>> [15109.231325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967265
>> [15109.276996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967264
>> [15109.322684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967263
>> [15109.368357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967262
>> [15109.414059] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967261
>> [15109.459734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967260
>> [15109.505426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967259
>> [15109.940853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967266
>> [15109.940854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967265
>> [15109.940856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967264
>> [15109.940857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967263
>> [15109.940858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967262
>> [15109.940860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967261
>> [15109.940862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967260
>> [15109.940863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967259
>> [15109.943869] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032750824+8) 4294967260
>> [15109.946364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032750824+8) 4294967259
>> [15143.938195] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967260
>> [15143.938199] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967261
>> [15143.938502] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967262
>> [15143.938860] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967263
>> [15143.939079] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967265
>> [15143.939055] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967264
>> [15194.321503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967264
>> [15194.330986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967263
>> [15194.340472] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967262
>> [15194.349954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967261
>> [15194.359431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967260
>> [15194.368920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967259
>> [15202.981864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967260
>> [15202.982006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967262
>> [15202.982049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967263
>> [15202.981938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967261
>> [15202.982750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967264
>> [15284.694932] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967263
>> [15284.704421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967262
>> [15284.713913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967261
>> [15284.723400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967260
>> [15284.732888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967259
>> [15293.005593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967264
>> [15293.005596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967263
>> [15293.005597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967262
>> [15293.005599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967261
>> [15293.005602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967260
>> [15293.005603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967259
>> [15293.008211] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032751272+8) 4294967260
>> [15293.009308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032751272+8) 4294967259
>> [15361.123918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967265
>> [15361.133405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967264
>> [15361.142894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967263
>> [15361.152384] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967262
>> [15361.161896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967261
>> [15361.171380] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967260
>> [15361.180864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967259
>> [15364.021538] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967260
>> [15364.021666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967261
>> [15364.022045] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967262
>> [15364.022114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967264
>> [15364.022092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967263
>> [15364.022137] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967265
>> [15364.022367] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967266
>> [15364.022522] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967267
>> [15374.355722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967266
>> [15374.554797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967265
>> [15374.753840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967264
>> [15374.952890] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967263
>> [15375.151938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967262
>> [15375.351041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967261
>> [15375.550089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967260
>> [15375.749178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967259
>> [15475.090773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967263
>> [15475.090778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967262
>> [15475.090782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967261
>> [15475.090790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967260
>> [15475.090795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967259
>> [15512.462087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530332008+8) 4294967260
>> [15605.182723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29530332008+8) 4294967259
>> [15610.777819] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967260
>> [15610.778112] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967261
>> [15610.778158] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967262
>> [15610.778540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967263
>> [15610.778741] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967264
>> [15610.778768] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967265
>> [15610.779126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967266
>> [15610.779136] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967267
>> [15625.092720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967266
>> [15625.291753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967265
>> [15625.490794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967264
>> [15625.689869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967263
>> [15625.888922] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967262
>> [15626.087961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967261
>> [15626.287002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967260
>> [15626.486060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967259
>> [15631.421893] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967260
>> [15631.422458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967261
>> [15631.422563] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967262
>> [15631.422804] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967263
>> [15631.422822] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967264
>> [15631.422885] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967265
>> [15649.575279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967266
>> [15684.592107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967265
>> [15684.601597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967264
>> [15684.611088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967263
>> [15684.620575] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967262
>> [15684.630060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967261
>> [15684.639545] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967260
>> [15684.649027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967259
>> [15690.850975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967260
>> [15690.851550] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967261
>> [15690.851939] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967262
>> [15690.852053] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967263
>> [15690.852178] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967264
>> [15690.852281] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967265
>> [15690.852313] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967266
>> [15735.620596] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967260
>> [15735.620646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967261
>> [15735.620685] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967263
>> [15735.620686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967264
>> [15735.620652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967262
>> [15735.621500] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967265
>> [15816.149770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967264
>> [15816.149772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967263
>> [15816.149774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967262
>> [15816.149776] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967261
>> [15816.149778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967260
>> [15816.149780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967259
>> [15844.779214] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967260
>> [15844.779218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967261
>> [15934.023215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967265
>> [15934.032699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967264
>> [15934.042184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967263
>> [15934.051668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967262
>> [15934.061153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967261
>> [15934.070639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967260
>> [15934.080128] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967259
>> [15935.505198] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967260
>> [15935.505344] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967261
>> [15935.505684] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967262
>> [15951.816975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967263
>> [15951.817541] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967264
>> [15951.817733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967265
>> [16048.370516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967264
>> [16048.370517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967263
>> [16048.370518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967262
>> [16048.370520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967261
>> [16048.370521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967260
>> [16048.370522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967259
>> [16048.372719] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967260
>> [16048.373083] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967261
>> [16048.373765] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967262
>> [16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967263
>> [16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967264
>> [16048.373936] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967265
>> [16048.373938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967266
>> [16048.373966] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967267
>> [16099.152109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967266
>> [16099.438210] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967265
>> [16099.724249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967264
>> [16100.011268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967263
>> [16100.298239] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967262
>> [16100.585188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967261
>> [16100.872155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967260
>> [16101.159163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967259
>> [16105.965912] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967260
>> [16105.966207] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967261
>> [16105.966279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967262
>> [16105.966399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967263
>> [16105.966643] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967264
>> [16105.966725] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967265
>> [16163.037754] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967260
>> [16163.037836] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967261
>> [16163.037883] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967262
>> [16163.038018] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967263
>> [16163.038537] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967264
>> [16163.039472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967265
>> [16163.263169] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967266
>> [16264.133495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967265
>> [16264.142981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967264
>> [16264.152464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967263
>> [16264.161951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967262
>> [16264.171441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967261
>> [16264.180935] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967260
>> [16264.190424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967259
>> [16266.992432] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967260
>> [16266.992711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967261
>> [16266.993213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967262
>> [16266.993398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967263
>> [16266.993458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967264
>> [16290.695547] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967265
>> [16290.695556] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967266
>> [16379.808266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967265
>> [16379.817751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967264
>> [16379.827236] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967263
>> [16379.836720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967262
>> [16379.846205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967261
>> [16379.855697] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967260
>> [16379.865187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967259
>> [16387.725516] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967260
>> [16387.725798] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967261
>> [16387.725917] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967262
>> [16387.726352] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967263
>> [16387.726414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967264
>> [16387.726707] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967265
>> [16408.022981] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967266
>> [16505.753964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967265
>> [16505.763456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967264
>> [16505.772952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967263
>> [16505.782444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967262
>> [16505.791921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967261
>> [16505.801405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967260
>> [16505.810889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967259
>> [16515.475620] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967261
>> [16515.475613] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967260
>> [16515.475711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967262
>> [16515.475844] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967263
>> [16515.475958] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967264
>> [16515.476377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967265
>> [16515.476744] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967266
>> [16534.160611] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967267
>> [16554.448056] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967266
>> [16554.457546] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967265
>> [16554.467022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967264
>> [16554.476504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967263
>> [16554.485979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967262
>> [16554.495463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967261
>> [16554.504953] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967260
>> [16554.514442] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967259
>> [16555.835592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967260
>> [16555.835847] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967261
>> [16555.836140] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967262
>> [16555.836279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967263
>> [16576.074289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967264
>> [16576.074652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967265
>> [16576.075049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967266
>> [16702.114836] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967265
>> [16702.124323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967264
>> [16702.133808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967263
>> [16702.143300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967262
>> [16702.152799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967261
>> [16702.162289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967260
>> [16702.171773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967259
>> [16710.388044] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967260
>> [16710.388157] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967261
>> [16710.388256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967262
>> [16710.388347] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967264
>> [16710.388390] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967265
>> [16710.388410] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967266
>> [16710.388285] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967263
>> [16710.389466] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967267
>> [16726.681690] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967266
>> [16732.076989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967265
>> [16732.227720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967264
>> [16732.378463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967263
>> [16732.529207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967262
>> [16732.679930] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967261
>> [16732.830681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967260
>> [16732.981424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967259
>> [16739.691982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967261
>> [16739.691980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967260
>> [16739.692049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967262
>> [16739.692753] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967263
>> [16739.693143] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967264
>> [16739.693286] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967265
>> [16739.693391] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967266
>> [16796.194339] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967260
>> [16796.194398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967261
>> [16796.194422] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967262
>> [16796.194483] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967263
>> [16796.194946] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967264
>> [16796.195038] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967265
>> [16796.195499] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967266
>> [16870.648462] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753896+8) 4294967260
>> [16898.893981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753896+8) 4294967259
>> [16967.311945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967265
>> [16967.321437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967264
>> [16967.330924] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967263
>> [16967.340412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967262
>> [16967.349896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967261
>> [16967.359379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967260
>> [16967.368867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967259
>> [16976.071394] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967260
>> [16976.071413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967261
>> [16976.071469] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967262
>> [16976.071568] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967263
>> [16976.071677] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967264
>> [16976.071732] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967266
>> [16976.071700] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967265
>> [16976.072068] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967267
>> [16990.311360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967266
>> [16990.481108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967265
>> [16990.650832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967264
>> [16990.820572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967263
>> [16990.990305] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967262
>> [16991.160043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967261
>> [16991.329786] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967260
>> [16991.499551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967259
>> [16996.987839] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754024+8) 4294967260
>> [17047.151615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754024+8) 4294967259
>> [17115.879905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967264
>> [17115.889391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967263
>> [17115.898868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967262
>> [17115.908352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967261
>> [17115.917837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967260
>> [17115.927320] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967259
>> [17116.400129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967260
>> [17116.400218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967261
>> [17116.400665] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967262
>> [17116.400972] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967263
>> [17116.401012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967264
>> [17116.401073] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967265
>> [17116.401094] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967266
>> [17173.631876] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967265
>> [17173.631877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967264
>> [17173.631878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967263
>> [17173.631880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967262
>> [17173.631881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967261
>> [17173.631883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967260
>> [17173.631885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967259
>> [17201.424309] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753960+8) 4294967260
>> [17209.574673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753960+8) 4294967259
>> [17212.610080] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967260
>> [17212.610550] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967261
>> [17226.574613] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967262
>> [17226.574818] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967263
>> [17226.575156] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967264
>> [17226.575266] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967265
>> [17226.575729] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967266
>> [17296.342998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967265
>> [17296.352083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967264
>> [17296.361178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967263
>> [17296.370271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967262
>> [17296.379366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967261
>> [17296.388464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967260
>> [17296.397557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967259
>> [17297.711781] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967260
>> [17297.712071] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967261
>> [17311.049521] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967262
>> [17311.049595] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967263
>> [17391.178025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967262
>> [17391.187127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967261
>> [17391.196230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967260
>> [17391.205325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967259
>> [17397.357339] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967260
>> [17397.358064] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967261
>> [17397.358191] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967262
>> [17406.112100] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967263
>> [17460.063716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967262
>> [17460.072623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967261
>> [17460.081520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967260
>> [17460.090422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967259
>> [17464.959257] __add_stripe_bio: md127: start ff2721beec8c2fa0(75624+8) 4294967260
>> [17482.594510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(75624+8) 4294967259
>> [17508.091641] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967261
>> [17508.091640] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967260
>> [17508.091647] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967262
>> [17532.456256] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967260
>> [17532.456294] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967261
>> [17532.456309] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967262
>> [17572.776358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967261
>> [17572.785257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967260
>> [17572.794163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967259
>> [17594.427109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(12348140520+8) 4294967259
>> [17631.571482] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967262
>> [17633.896087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967261
>> [17633.904990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967260
>> [17633.913889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967259
>> [17640.670153] __add_stripe_bio: md127: start ff2721beec8c2fa0(42344+8) 4294967262
>> [17661.740739] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967264
>> [17661.740869] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967265
>> [17691.866848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967264
>> [17691.866850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967263
>> [17691.866851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967262
>> [17691.866853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967261
>> [17691.866854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967260
>> [17691.866855] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967259
>> [17711.055783] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967260
>> [17711.055850] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967262
>> [17711.055807] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967261
>> [17753.659012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967261
>> [17753.667920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967260
>> [17753.676822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967259
>> [17756.839874] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754472+8) 4294967260
>> [17761.904589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754472+8) 4294967259
>> [17764.608952] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967260
>> [17764.609156] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967262
>> [17764.609117] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967261
>> [17764.609992] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967263
>> [17785.372101] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967264
>> [17785.480370] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967265
>> [17831.956995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967264
>> [17831.965897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967263
>> [17831.974795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967262
>> [17831.983692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967261
>> [17831.992592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967260
>> [17832.001495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967259
>> [17834.344591] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755304+8) 4294967260
>> [17843.122828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032755304+8) 4294967259
>> [17845.582553] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967260
>> [17845.582666] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967261
>> [17845.583154] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967262
>> [17845.583190] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967264
>> [17845.583179] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967263
>> [17895.265867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967263
>> [17895.274772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967262
>> [17895.283673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967261
>> [17895.292578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967260
>> [17895.301470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967259
>> [17898.047030] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967260
>> [17898.048282] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967261
>> [17898.049252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967260
>> [17898.049253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967259
>> [17910.857571] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967260
>> [17910.857605] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967261
>> [17940.805214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967260
>> [17940.805216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967259
>> [17953.692889] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967261
>> [17953.692929] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
>> [17953.693143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
>> [17953.694264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
>> [18003.530258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
>> [18003.539162] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967260
>> [18003.548066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967259
>> [18009.481434] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26216+8) 4294967259
>> [18009.492293] __add_stripe_bio: md127: start ff2721beec8c2fa0(25704+8) 4294967260
>> [18048.069279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25704+8) 4294967259
>> [18048.552558] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967260
>> [18048.552796] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967261
>> [18048.552825] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967262
>> [18048.554933] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967263
>> [18081.808216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967262
>> [18081.808217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967261
>> [18081.808219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967260
>> [18081.808220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967259
>> [18081.819956] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967260
>> [18081.820706] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967261
>> [18110.361331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967260
>> [18110.361332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967259
>> [18160.565496] __add_stripe_bio: md127: start ff2721beec8c2fa0(10792+8) 4294967263
>> [18169.306917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967260
>> [18169.306918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967259
>> [18169.318095] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967260
>> [18169.319212] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967261
>> [18169.394456] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967262
>> [18211.597621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(61800+8) 4294967259
>> [18261.334926] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967260
>> [18261.335380] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967261
>> [18297.192489] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25501361192+8) 4294967259
>> [18332.815982] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967262
>> [18332.816950] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967263
>> [18332.819467] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967264
>> [18332.819799] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967265
>> [18332.820819] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967266
>> [18363.308810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967265
>> [18363.308813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967264
>> [18363.308816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967263
>> [18363.308819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967262
>> [18363.308822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967261
>> [18363.308825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967260
>> [18363.308828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967259
>> [18412.849619] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967262
>> [18412.850582] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967263
>> [18412.850911] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967264
>> [18412.851264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967265
>> [18443.565725] __add_stripe_bio: md127: start ff2721beec8c2fa0(28454161000+8) 4294967260
>> [18473.028395] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967260
>> [18473.029608] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967261
>> [18502.557646] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967262
>> [18502.557723] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967263
>> [18502.558152] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967264
>> [18502.558930] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967265
>> [18502.559041] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967266
>> [18502.563022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967265
>> [18502.563024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967264
>> [18502.563025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967263
>> [18502.563026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967262
>> [18502.563027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967261
>> [18502.563029] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967260
>> [18502.563030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967259
>> [18560.133303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331007720+8) 4294967259
>> [18589.564077] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967260
>> [18589.564089] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967261
>> [18589.564670] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967262
>> [18589.565137] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967263
>> [18589.565700] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967264
>> [18589.566003] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967265
>> [18638.817896] __add_stripe_bio: md127: start ff2721beec8c2fa0(331165224+8) 4294967260
>> [18639.851587] __add_stripe_bio: md127: start ff2721beec8c2fa0(4563405800+8) 4294967260
>> [18721.230354] __add_stripe_bio: md127: start ff2721beec8c2fa0(6174129128+8) 4294967260
>> [18753.264400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6174129128+8) 4294967259
>> [18814.918267] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755240+8) 4294967260
>> [18817.035728] __add_stripe_bio: md127: start ff2721beec8c2fa0(331192424+8) 4294967267
>> [18817.037803] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967266
>> [18817.037809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967265
>> [18817.037812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967264
>> [18817.037815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967263
>> [18817.037818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967262
>> [18817.037822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967261
>> [18817.037825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967260
>> [18817.037827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967259
>> [18847.022837] __add_stripe_bio: md127: start ff2721beec8c2fa0(8589935656+8) 4294967260
>> [18931.949431] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530321832+8) 4294967260
>> [19054.852844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967265
>> [19054.852846] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967264
>> [19054.852847] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967263
>> [19054.852849] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967262
>> [19054.852850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967261
>> [19054.852852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967260
>> [19054.852853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967259
>> [19104.480492] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967264
>> [19104.480523] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967265
>> [19162.254665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967263
>> [19162.254666] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967262
>> [19162.254668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967261
>> [19162.254669] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967260
>> [19162.254671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967259
>> [19194.644616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967260
>> [19194.644618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967259
>> [19225.730035] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967260
>> [19225.730135] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967261
>> [19225.730341] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967262
>> [19225.733024] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967263
>> [19225.733509] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967264
>> [19225.799551] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967265
>> [19250.693927] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967266
>> [19251.803761] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967260
>> [19251.805818] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967261
>> [19251.807214] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967262
>> [19251.807230] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967263
>> [19251.807441] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967264
>> [19251.807684] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967265
>> [19284.419215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967265
>> [19284.419218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967264
>> [19284.419221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967263
>> [19284.419222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967262
>> [19284.419225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967261
>> [19284.419228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967260
>> [19284.419230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967259
>> [19324.123801] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967260
>> [19324.124880] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967261
>> [19389.626363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967264
>> [19389.626366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967263
>> [19389.626370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967262
>> [19389.626373] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967261
>> [19389.626376] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967260
>> [19389.626379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967259
>> [19411.000068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967264
>> [19411.000070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967263
>> [19411.000071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967262
>> [19411.000073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967261
>> [19411.000075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967260
>> [19411.000076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967259
>> [19442.885291] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967260
>> [19442.885494] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967261
>> [19442.885496] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967262
>> [19442.885575] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967263
>> [19500.040964] __add_stripe_bio: md127: start ff2721beec8c2fa0(536935976+8) 4294967260
>> [19503.516938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967263
>> [19503.516939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967262
>> [19503.516941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967261
>> [19503.516942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967260
>> [19503.516944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967259
>> [19531.506729] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967261
>> [19531.507247] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967262
>> [19531.510481] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967263
>> [19559.370264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967262
>> [19559.370268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967261
>> [19559.370272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967260
>> [19559.370275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967259
>> [19590.464792] __add_stripe_bio: md127: start ff2721beec8c2fa0(7788215976+8) 4294967260
>> [19620.633883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(7788215976+8) 4294967259
>> [19650.250748] __add_stripe_bio: md127: start ff2721beec8c2fa0(536913192+8) 4294967260
>> [19680.643891] __add_stripe_bio: md127: start ff2721beec8c2fa0(20135746152+8) 4294967260
>> [19708.804030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(20135746152+8) 4294967259
>> [19737.574540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967260
>> [19737.574543] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967259
>> [19765.378569] __add_stripe_bio: md127: start ff2721beec8c2fa0(536900904+8) 4294967261
>> [19794.831033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967260
>> [19821.381894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967259
>> [19821.429688] __add_stripe_bio: md127: start ff2721beec8c2fa0(536898024+8) 4294967264
>> [19856.960152] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967260
>> [19856.964598] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967261
>> [19856.967055] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967262
>> [19879.048926] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967263
>> [19879.048937] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967264
>> [19887.395626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967263
>> [19887.395631] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967262
>> [19887.395634] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967261
>> [19887.395637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967260
>> [19887.395639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967259
>> [19887.406610] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967260
>> [19916.087911] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967261
>> [19918.951492] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967260
>> [19947.259645] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967261
>> [19983.717648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967260
>> [19983.717650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967259
>> [19983.723154] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
>> [19983.723284] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967261
>> [19983.723330] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967262
>> [19983.723447] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967263
>> [20015.225720] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967264
>> [20015.225737] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967265
>> [20015.233248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967264
>> [20015.233249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967263
>> [20015.233250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967262
>> [20015.233251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967261
>> [20015.233252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967260
>> [20015.233253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967259
>> [20039.634420] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
>> [20059.881519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967263
>> [20059.881521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967262
>> [20059.881522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967261
>> [20059.881523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967260
>> [20059.881524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967259
>> [20091.703960] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967260
>> [20091.704062] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967261
>> [20091.704130] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967262
>> [20091.704371] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967263
>> [20091.704597] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967264
>> [20091.705014] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967265
>> [20091.705043] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967266
>> [20091.705080] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967267
>> [20107.172534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967266
>> [20107.416045] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967265
>> [20107.659569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967264
>> [20107.903083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967263
>> [20108.146684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967262
>> [20108.390242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967261
>> [20108.633740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967260
>> [20108.877229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967259
>> [20125.925086] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967260
>> [20125.925103] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967261
>> [20128.394916] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967262
>> [20128.583655] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967263
>> [20132.751983] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967264
>> [20138.332744] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967260
>> [20138.333973] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967261
>> [20138.334178] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967262
>> [20138.335009] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967263
>> [20138.335115] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967264
>> [20138.335265] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967265
>> [20138.338858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967264
>> [20138.338860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967263
>> [20138.338862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967262
>> [20138.338864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967261
>> [20138.338866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967260
>> [20138.338868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967259
>> [20166.832981] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967260
>> [20166.833229] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967261
>> [20166.834027] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967262
>> [20196.134888] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967263
>> [20199.500306] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967263
>> [20199.500310] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967262
>> [20199.500313] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967261
>> [20199.500317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967260
>> [20199.500321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
>> [20199.942600] __add_stripe_bio: md127: start ff2721beec8c2fa0(555373992+8) 4294967260
>> [20199.944367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
>> [20245.088563] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967260
>> [20245.088642] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967261
>> [20245.088687] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967262
>> [20245.088777] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967263
>> [20245.091384] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967264
>> [20245.091670] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967265
>> [20245.091900] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967266
>> [20245.092055] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967267
>> [20271.055283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967266
>> [20271.064573] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967265
>> [20271.073864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967264
>> [20271.083165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967263
>> [20275.933633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967262
>> [20275.942927] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967261
>> [20275.952228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967260
>> [20275.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967259
>> [20280.815537] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967260
>> [20280.816905] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967261
>> [20442.104270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967260
>> [20443.448533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967259
>> [20445.747762] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967260
>> [20445.747900] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967261
>> [20445.747918] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967262
>> [20445.748615] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967263
>> [20494.667635] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967264
>> [20494.667769] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967265
>> [20524.978466] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967264
>> [20524.987753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967263
>> [20524.997049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967262
>> [20525.006349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967261
>> [20533.505202] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967260
>> [20533.514488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967259
>> [20535.464796] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967260
>> [20535.465312] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967261
>> [20547.361843] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967262
>> [20547.362543] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967263
>> [20547.362994] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967264
>> [20565.098049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967263
>> [20565.098051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967262
>> [20565.098052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967261
>> [20565.098054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967260
>> [20565.098055] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967259
>> [20565.099574] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
>> [20565.099733] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
>> [20565.099960] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
>> [20609.002609] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
>> [20609.011900] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
>> [20609.021187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259
>> [20612.483895] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
>> [20612.484023] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
>> [20612.484674] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
>> [20641.495298] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
>> [20641.504590] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
>> [20641.513885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-03 15:54                                                                           ` Christian Theune
@ 2024-11-03 16:16                                                                             ` Dragan Milivojević
  0 siblings, 0 replies; 88+ messages in thread
From: Dragan Milivojević @ 2024-11-03 16:16 UTC (permalink / raw)
  To: Christian Theune
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	yukuai (C)

On Sun, 3 Nov 2024 at 16:54, Christian Theune <ct@flyingcircus.io> wrote:
>
> Hi,
>
> Running without the debug patch again on 6.11.5, I’m still able to reproduce with the bitmap enabled. I’ve gathered the full list of all stuck tasks.
>
> Just as a reminder on the setup, the layering here is:
>
> nvme drives → mdraid → lvm → dmcrypt → xfs
>

If you have the time (this is just a hunch), could you test with
--bitmap-chunk=2G?
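
For reference, changing the chunk size of an internal write-intent bitmap is a two-step --grow operation (remove, then re-add with the new chunk). A sketch only — the device name is assumed from the thread, and the commands are merely echoed here rather than run, since they should not be applied blindly to a live array:

```shell
# Sketch: recreate the write-intent bitmap with a 2 GiB chunk.
# /dev/md127 is assumed from the thread; commands are echoed, not executed.
MD=/dev/md127
echo "mdadm --grow $MD --bitmap=none"
echo "mdadm --grow $MD --bitmap=internal --bitmap-chunk=2G"
```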

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-01  8:33                                                                         ` Christian Theune
  2024-11-03 15:54                                                                           ` Christian Theune
@ 2024-11-04 11:29                                                                           ` Yu Kuai
  2024-11-04 11:51                                                                             ` Christian Theune
  2024-11-04 11:40                                                                           ` Yu Kuai
  2 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-11-04 11:29 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/11/01 16:33, Christian Theune wrote:
> A thought about the high numbers: they look like relatively regular “many bits on” patterns:
> 
>>>> bin(4294967264)
> '0b11111111111111111111111111100000'
> 
> I think I enabled the bitmap online, so MAYBE that skips some memory initialisation that would otherwise happen during a regular boot. I’m not seeing those numbers anymore after booting cleanly.
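
(Side note: those large values sit just below 2^32. Reinterpreted as signed 32-bit integers they are small negatives, -37 and upward in the traces, which fits a counter decremented below zero or left uninitialised rather than random garbage. A quick illustrative check, outside the kernel:)

```python
import ctypes

# The counter values in the trace sit just below 2**32.  Reinterpreting
# them as signed 32-bit integers yields small negative numbers,
# consistent with a counter that went below zero or was uninitialised.
for v in (4294967259, 4294967264, 4294967267):
    print(v, "->", ctypes.c_int32(v).value, bin(v))
```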
> 
> I’m letting things run for a few hours again, but I’m still concerned that the printk overhead may prevent the system from actually triggering the issue.
> 
> If I’m not getting things stuck again, then I’ll double-check that my reproducer is still valid on 6.11.5 without the debugging patch.
> 
> Christian
> 
>> On 1. Nov 2024, at 08:56, Christian Theune <ct@flyingcircus.io> wrote:
>>
>> Hi,
>>
>> Ok, so the journal didn’t have that because the volume was way too much. Looks like I actually need to stick with the serial console logging after all.
>>
>> I dug out a different log that goes back further, but even that one seems to be missing entries from early on, when I didn’t have the serial console attached.
>>
>> I’m wondering whether this indicates an issue during initialization. I’m going to reboot the machine and make sure I get the early logs with those numbers.

I can add a check and trigger a BUG_ON() instead, if you're fine with
that; that way the log should be much smaller.

Thanks,
Kuai

>>
>> [  405.347345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22301786792+8) 4294967259
>> [  432.542465] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967260
>> [  432.542469] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967261
>> [  434.272964] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967262
>> [  434.273175] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967263
>> [  434.273189] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967264
>> [  434.273285] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967265
>> [  434.274063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967264
>> [  434.274066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967263
>> [  434.274070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967262
>> [  434.274073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967261
>> [  434.274078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967260
>> [  434.274083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967259
>> [  434.276609] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967260
>> [  434.278939] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967261
>> [  464.922354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967260
>> [  464.931833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967259
>> [  466.964557] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967260
>> [  466.964616] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967261
>> [  474.399930] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967262
>> [  474.451451] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967263
>> [  489.447079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967262
>> [  489.456574] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967261
>> [  489.466069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967260
>> [  489.475565] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967259
>> [  491.235517] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967260
>> [  491.235602] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967261
>> [  498.153108] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967262
>> [  498.156307] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967263
>> [  530.332619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967262
>> [  530.342110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967261
>> [  530.351595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967260
>> [  530.361082] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967259
>> [  535.176774] __add_stripe_bio: md127: start ff2721beec8c2fa0(24985208424+8) 4294967260
>> [  549.125326] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24985208424+8) 4294967259
>> [  549.635782] __add_stripe_bio: md127: start ff2721beec8c2fa0(25521770024+8) 4294967261
>> [  590.875593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967260
>> [  590.885081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967259
>> [  596.973863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967263
>> [  596.973866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967262
>> [  596.973869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967261
>> [  596.973871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967260
>> [  596.973881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967259
>> [  596.974557] __add_stripe_bio: md127: start ff2721beec8c2fa0(26325099752+8) 4294967260
>> [  637.646142] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26325099752+8) 4294967259
>> [  641.292887] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032741096+8) 4294967260
>> [  654.931195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032741096+8) 4294967259
>> [  654.933295] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967260
>> [  654.933570] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967261
>> [  654.935967] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967262
>> [  654.937411] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967263
>> [  683.008873] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592771048+8) 4294967264
>> [  685.689494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967263
>> [  685.689496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967262
>> [  685.689498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967261
>> [  685.689499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967260
>> [  685.689501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592771048+8) 4294967259
>> [  685.690260] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967260
>> [  685.692999] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967261
>> [  685.693119] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967262
>> [  685.693124] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967263
>> [  685.693427] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967264
>> [  685.693428] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967265
>> [  685.693517] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967266
>> [  685.693528] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592774184+8) 4294967267
>> [  713.684556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967266
>> [  713.694044] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967265
>> [  713.703539] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967264
>> [  713.713026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967263
>> [  713.722512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967262
>> [  713.731996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967261
>> [  713.741480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967260
>> [  713.750962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592774184+8) 4294967259
>> [  715.765954] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967260
>> [  715.766034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967261
>> [  715.766278] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967262
>> [  715.766305] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967263
>> [  715.766468] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967264
>> [  716.077253] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592775464+8) 4294967265
>> [  731.258391] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
>> [  731.258401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967261
>> [  731.258584] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967262
>> [  731.258711] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967263
>> [  731.260991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967264
>> [  731.261318] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967265
>> [  731.261513] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967266
>> [  758.285428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967265
>> [  758.294912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967264
>> [  758.304396] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967263
>> [  758.313881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967262
>> [  758.323377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967261
>> [  758.332875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967260
>> [  758.342365] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
>> [  758.922198] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592780072+8) 4294967260
>> [  780.668347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592780072+8) 4294967259
>> [  780.957247] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967260
>> [  780.957393] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967261
>> [  780.957440] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967262
>> [  780.957616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967263
>> [  780.957675] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967264
>> [  780.957754] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967265
>> [  790.623177] __add_stripe_bio: md127: start ff2721beec8c2fa0(26592782888+8) 4294967266
>> [  828.374094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967265
>> [  828.383581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967264
>> [  828.393067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967263
>> [  828.402553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967262
>> [  828.412040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967261
>> [  828.421525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967260
>> [  828.431012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26592782888+8) 4294967259
>> [  830.477927] __add_stripe_bio: md127: start ff2721beec8c2fa0(13690207080+8) 4294967260
>> [  851.040449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(13690207080+8) 4294967259
>> [  851.762678] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967260
>> [  851.762837] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967261
>> [  851.762948] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967262
>> [  851.763032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967263
>> [  851.763068] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967264
>> [  851.763112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967265
>> [  851.763202] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967266
>> [  851.766405] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861766952+8) 4294967267
>> [  851.768763] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967266
>> [  851.768766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967265
>> [  851.768768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967264
>> [  851.768770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967263
>> [  851.768773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967262
>> [  851.768775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967261
>> [  851.768778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967260
>> [  851.768780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861766952+8) 4294967259
>> [  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967261
>> [  851.769437] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967260
>> [  880.058982] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967262
>> [  880.059032] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967263
>> [  880.059090] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967264
>> [  880.059140] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967265
>> [  880.059317] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768360+8) 4294967266
>> [  891.735497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967265
>> [  891.744974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967264
>> [  891.754455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967263
>> [  891.763939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967262
>> [  891.773422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967261
>> [  891.782975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967260
>> [  891.792469] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861768360+8) 4294967259
>> [  897.108788] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967260
>> [  897.108789] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967261
>> [  897.108813] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967262
>> [  897.108823] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967263
>> [  903.693112] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967264
>> [  904.663454] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861768872+8) 4294967265
>> [  906.906830] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967260
>> [  906.908087] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967261
>> [  906.908508] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967262
>> [  906.910088] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967263
>> [  906.912093] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967264
>> [  906.912840] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967265
>> [  906.914294] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967266
>> [  906.914323] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861770984+8) 4294967267
>> [  906.914806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967266
>> [  906.914808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967265
>> [  906.914809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967264
>> [  906.914810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967263
>> [  906.914811] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967262
>> [  906.914813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967261
>> [  906.914815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967260
>> [  906.914817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861770984+8) 4294967259
>> [  934.849642] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861773736+8) 4294967261
>> [  934.854037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967260
>> [  934.854040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861773736+8) 4294967259
>> [  934.855808] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967260
>> [  934.855945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967261
>> [  963.315203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967262
>> [  963.315320] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967263
>> [  963.315327] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967264
>> [  963.315499] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861776680+8) 4294967265
>> [  982.866693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967264
>> [  982.876178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967263
>> [  982.885665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967262
>> [  982.895158] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967261
>> [  982.904644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967260
>> [  982.914129] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861776680+8) 4294967259
>> [  990.121616] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967260
>> [  990.121662] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967261
>> [  990.121768] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967262
>> [  990.121828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967263
>> [  990.121843] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861777832+8) 4294967264
>> [ 1013.206756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967263
>> [ 1013.206757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967262
>> [ 1013.206758] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967261
>> [ 1013.206759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967260
>> [ 1013.224363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861777832+8) 4294967259
>> [ 1032.134913] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967260
>> [ 1032.134928] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967261
>> [ 1032.135028] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967262
>> [ 1032.135078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967263
>> [ 1041.027196] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967264
>> [ 1041.027321] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967265
>> [ 1041.027485] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967266
>> [ 1057.623365] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861781032+8) 4294967267
>> [ 1076.893035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967266
>> [ 1076.902520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967265
>> [ 1076.912004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967264
>> [ 1076.921490] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967263
>> [ 1076.930986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967262
>> [ 1076.940475] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967261
>> [ 1076.949962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967260
>> [ 1076.959446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861781032+8) 4294967259
>> [ 1077.721459] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967260
>> [ 1077.721615] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967261
>> [ 1077.721706] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967262
>> [ 1077.721739] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967263
>> [ 1077.721765] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861784872+8) 4294967264
>> [ 1110.833257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967263
>> [ 1110.842743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967262
>> [ 1110.852225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967261
>> [ 1110.861709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967260
>> [ 1110.871194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861784872+8) 4294967259
>> [ 1112.052569] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967260
>> [ 1112.052666] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967261
>> [ 1112.052695] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967262
>> [ 1112.052727] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967263
>> [ 1112.052778] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967264
>> [ 1112.053637] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967265
>> [ 1112.053649] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861786920+8) 4294967266
>> [ 1173.829738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967265
>> [ 1173.839223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967264
>> [ 1173.848709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967263
>> [ 1173.858195] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967262
>> [ 1173.867683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967261
>> [ 1173.877167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967260
>> [ 1173.886654] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861789672+8) 4294967259
>> [ 1176.428651] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967260
>> [ 1176.428940] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967263
>> [ 1176.428942] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967264
>> [ 1176.428939] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967262
>> [ 1176.428903] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967261
>> [ 1176.429040] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792872+8) 4294967265
>> [ 1191.700497] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967265
>> [ 1191.704134] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967266
>> [ 1191.704199] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861797928+8) 4294967267
>> [ 1191.705804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967266
>> [ 1191.705808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967265
>> [ 1191.705809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967264
>> [ 1191.705812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967263
>> [ 1191.705815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967262
>> [ 1191.705817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967261
>> [ 1191.705819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967260
>> [ 1191.705821] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861797928+8) 4294967259
>> [ 1191.810863] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861792488+8) 4294967260
>> [ 1244.235788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27917293544+8) 4294967260
>> [ 1309.535319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967264
>> [ 1309.544810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967263
>> [ 1309.554303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967262
>> [ 1309.563787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967261
>> [ 1309.573272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967260
>> [ 1309.582759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861804392+8) 4294967259
>> [ 1314.950362] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967262
>> [ 1314.950455] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967263
>> [ 1314.950457] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967264
>> [ 1314.950470] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861809064+8) 4294967265
>> [ 1345.736319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967264
>> [ 1345.745804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967263
>> [ 1345.755290] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967262
>> [ 1345.764773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967261
>> [ 1345.774264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967260
>> [ 1345.783759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861809064+8) 4294967259
>> [ 1346.823135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28462541160+8) 4294967259
>> [ 1346.824776] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967260
>> [ 1346.824799] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967261
>> [ 1346.824806] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967262
>> [ 1346.824922] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967263
>> [ 1346.825566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814312+8) 4294967264
>> [ 1373.560546] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861814568+8) 4294967260
>> [ 1431.650090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861814568+8) 4294967259
>> [ 1468.944088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967265
>> [ 1468.953581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967264
>> [ 1468.963067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967263
>> [ 1468.972552] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967262
>> [ 1468.982036] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967261
>> [ 1468.991524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967260
>> [ 1469.001009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861821608+8) 4294967259
>> [ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967261
>> [ 1474.904585] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967260
>> [ 1474.904634] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967262
>> [ 1474.904752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967263
>> [ 1474.904798] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967264
>> [ 1474.904805] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967265
>> [ 1477.837716] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861825384+8) 4294967266
>> [ 1479.836591] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967260
>> [ 1479.858896] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967261
>> [ 1479.859238] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967262
>> [ 1479.859525] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967263
>> [ 1479.859669] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967264
>> [ 1479.859897] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967265
>> [ 1479.860071] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861828456+8) 4294967266
>> [ 1507.386887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967265
>> [ 1507.396375] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967264
>> [ 1507.405858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967263
>> [ 1507.415343] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967262
>> [ 1507.424831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967261
>> [ 1507.434322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967260
>> [ 1507.443826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861828456+8) 4294967259
>> [ 1569.056325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967264
>> [ 1569.065837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967263
>> [ 1569.075325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967262
>> [ 1569.084816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967261
>> [ 1569.094308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967260
>> [ 1569.103801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861834088+8) 4294967259
>> [ 1571.985752] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967260
>> [ 1571.985858] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967261
>> [ 1571.985888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967263
>> [ 1571.985864] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967262
>> [ 1571.985962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967264
>> [ 1571.986450] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861838056+8) 4294967265
>> [ 1582.882338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967264
>> [ 1582.882340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967263
>> [ 1582.882342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967262
>> [ 1582.882344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967261
>> [ 1582.882345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967260
>> [ 1582.882346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861838056+8) 4294967259
>> [ 1582.884560] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967260
>> [ 1582.884860] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967261
>> [ 1582.884880] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967262
>> [ 1582.885034] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967263
>> [ 1582.885126] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967264
>> [ 1582.885164] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861843304+8) 4294967265
>> [ 1675.519030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967264
>> [ 1675.528518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967263
>> [ 1675.538016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967262
>> [ 1675.547513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967261
>> [ 1675.556999] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967260
>> [ 1675.566485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861843304+8) 4294967259
>> [ 1682.250399] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967260
>> [ 1682.250639] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967261
>> [ 1682.250690] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967262
>> [ 1682.250718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967263
>> [ 1682.250974] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967264
>> [ 1682.251078] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967265
>> [ 1682.251306] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861848168+8) 4294967266
>> [ 1704.298207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967265
>> [ 1704.298211] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967264
>> [ 1704.298214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967263
>> [ 1704.298216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967262
>> [ 1704.298218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967261
>> [ 1704.298220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967260
>> [ 1704.298222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861848168+8) 4294967259
>> [ 1704.299566] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967260
>> [ 1704.299718] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967261
>> [ 1704.299758] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967262
>> [ 1704.299834] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967263
>> [ 1704.299888] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967264
>> [ 1704.304001] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967265
>> [ 1704.304132] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861852968+8) 4294967266
>> [ 1772.283712] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967264
>> [ 1772.293200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967263
>> [ 1772.302685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967262
>> [ 1772.312169] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967261
>> [ 1772.321652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967260
>> [ 1772.331135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861854696+8) 4294967259
>> [ 1776.549687] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967260
>> [ 1776.549697] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967261
>> [ 1776.549898] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967263
>> [ 1776.549945] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967264
>> [ 1776.549962] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967265
>> [ 1776.549828] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967262
>> [ 1776.550033] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967266
>> [ 1776.550080] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861858856+8) 4294967267
>> [ 1781.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967266
>> [ 1782.080461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967265
>> [ 1782.199404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967264
>> [ 1782.318346] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967263
>> [ 1782.438150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967262
>> [ 1782.557963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967261
>> [ 1782.677762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967260
>> [ 1782.797570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861858856+8) 4294967259
>> [ 1786.992892] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967261
>> [ 1786.992878] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967260
>> [ 1786.993259] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967262
>> [ 1786.993401] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967263
>> [ 1786.993449] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967264
>> [ 1795.858021] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967266
>> [ 1795.858009] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967265
>> [ 1795.858180] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861859880+8) 4294967267
>> [ 1805.164880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967266
>> [ 1805.174370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967265
>> [ 1805.183853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967264
>> [ 1805.193339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967263
>> [ 1805.202828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967262
>> [ 1805.212314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967261
>> [ 1805.221800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967260
>> [ 1805.231291] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861859880+8) 4294967259
>> [ 1807.730968] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967261
>> [ 1807.730937] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967260
>> [ 1807.731203] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967262
>> [ 1807.731267] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967263
>> [ 1807.731406] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967264
>> [ 1807.731542] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967265
>> [ 1807.731764] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861861672+8) 4294967266
>> [ 1893.800189] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967263
>> [ 1893.809691] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967262
>> [ 1893.819186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967261
>> [ 1893.828675] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967260
>> [ 1893.838165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861869864+8) 4294967259
>> [ 1897.304170] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967260
>> [ 1897.304333] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967261
>> [ 1897.304579] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967262
>> [ 1897.304721] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967263
>> [ 1897.304812] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967264
>> [ 1897.304978] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861873064+8) 4294967265
>> [ 1910.883901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861873064+8) 4294967259
>> [ 1910.888991] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967262
>> [ 1910.888995] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967264
>> [ 1910.888988] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967261
>> [ 1910.888993] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967263
>> [ 1910.888986] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861876200+8) 4294967260
>> [ 1990.952649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967264
>> [ 1990.952651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967263
>> [ 1990.952653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967262
>> [ 1990.952655] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967261
>> [ 1990.952657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967260
>> [ 1990.952659] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861876200+8) 4294967259
>> [ 1990.957010] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967260
>> [ 1990.957011] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967261
>> [ 1990.957016] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967262
>> [ 1990.957020] __add_stripe_bio: md127: start ff2721beec8c2fa0(26861883368+8) 4294967263
>> [ 2021.437780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967262
>> [ 2021.437782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967261
>> [ 2021.437783] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967260
>> [ 2021.437785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26861883368+8) 4294967259
>> [ 2021.442407] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967260
>> [ 2021.443820] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967261
>> [ 2045.539668] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967262
>> [ 2045.540142] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967263
>> [ 2045.540232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967264
>> [ 2045.540262] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967265
>> [ 2050.125201] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130481192+8) 4294967266
>> [ 2057.875279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967265
>> [ 2057.884767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967264
>> [ 2057.894262] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967263
>> [ 2057.903753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967262
>> [ 2057.913237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967261
>> [ 2057.922722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967260
>> [ 2057.932205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130481192+8) 4294967259
>> [ 2059.233074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967260
>> [ 2059.233116] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967261
>> [ 2059.233120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967262
>> [ 2059.233171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967263
>> [ 2059.233632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967264
>> [ 2059.233684] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967265
>> [ 2059.235328] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967266
>> [ 2059.235336] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130482408+8) 4294967267
>> [ 2059.238433] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967266
>> [ 2059.238435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967265
>> [ 2059.238436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967264
>> [ 2059.238437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967263
>> [ 2059.238439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967262
>> [ 2059.238440] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967261
>> [ 2059.238441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967260
>> [ 2059.238443] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130482408+8) 4294967259
>> [ 2090.648331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967260
>> [ 2090.648399] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967261
>> [ 2090.648402] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967262
>> [ 2090.648414] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967263
>> [ 2090.648428] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967264
>> [ 2090.648540] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967265
>> [ 2090.651017] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967266
>> [ 2090.700177] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130483880+8) 4294967267
>> [ 2118.173167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967266
>> [ 2118.182657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967265
>> [ 2118.192147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967264
>> [ 2118.201638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967263
>> [ 2118.211138] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967262
>> [ 2118.220626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967261
>> [ 2118.230111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967260
>> [ 2118.239602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130483880+8) 4294967259
>> [ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967260
>> [ 2119.232574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967261
>> [ 2119.232691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967262
>> [ 2119.232707] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967263
>> [ 2119.232880] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967264
>> [ 2119.232926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967265
>> [ 2119.232990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967266
>> [ 2119.233054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130486888+8) 4294967267
>> [ 2146.417917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967266
>> [ 2151.981747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967265
>> [ 2152.121412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967264
>> [ 2152.261047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967263
>> [ 2152.400699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967262
>> [ 2152.540356] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967261
>> [ 2152.680005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967260
>> [ 2152.819653] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130486888+8) 4294967259
>> [ 2157.037069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967260
>> [ 2157.037363] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967261
>> [ 2157.037405] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967262
>> [ 2157.037421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967263
>> [ 2157.037446] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130488808+8) 4294967264
>> [ 2214.237201] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967263
>> [ 2214.246685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967262
>> [ 2214.256174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967261
>> [ 2214.265665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967260
>> [ 2214.275152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130488808+8) 4294967259
>> [ 2220.022835] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967260
>> [ 2220.022859] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967261
>> [ 2220.022876] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967262
>> [ 2220.022912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967263
>> [ 2220.023258] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967265
>> [ 2220.023161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130492776+8) 4294967264
>> [ 2243.495792] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130495656+8) 4294967264
>> [ 2272.045830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967265
>> [ 2272.045833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967264
>> [ 2272.045835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967263
>> [ 2272.045837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967262
>> [ 2272.045838] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967261
>> [ 2272.045840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967260
>> [ 2272.045841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130499624+8) 4294967259
>> [ 2302.557785] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967266
>> [ 2302.557787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967265
>> [ 2302.557789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967264
>> [ 2302.557791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967263
>> [ 2302.557793] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967262
>> [ 2302.557796] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967261
>> [ 2302.557797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967260
>> [ 2302.557799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130503336+8) 4294967259
>> [ 2302.561904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967260
>> [ 2302.561926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967261
>> [ 2302.561957] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967263
>> [ 2302.561933] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967262
>> [ 2302.562006] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967264
>> [ 2302.562203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967265
>> [ 2302.562232] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967266
>> [ 2302.562597] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130505320+8) 4294967267
>> [ 2329.647721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967266
>> [ 2329.738196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967265
>> [ 2329.828677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967264
>> [ 2329.919153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967263
>> [ 2330.009644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967262
>> [ 2330.100125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967261
>> [ 2330.190603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967260
>> [ 2330.281085] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130505320+8) 4294967259
>> [ 2332.172188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967265
>> [ 2332.172198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967264
>> [ 2332.172233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967263
>> [ 2332.172242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967262
>> [ 2332.172255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967261
>> [ 2332.172264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967260
>> [ 2332.172278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130506664+8) 4294967259
>> [ 2332.178323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967262
>> [ 2332.178317] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967261
>> [ 2332.178310] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967260
>> [ 2332.178326] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967263
>> [ 2332.178394] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967264
>> [ 2332.178580] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967265
>> [ 2332.178600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967266
>> [ 2332.178697] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130508328+8) 4294967267
>> [ 2358.527771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967266
>> [ 2358.657096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967265
>> [ 2358.786383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967264
>> [ 2358.915693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967263
>> [ 2359.044994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967262
>> [ 2359.174296] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967261
>> [ 2359.303592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967260
>> [ 2359.432875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130508328+8) 4294967259
>> [ 2367.401519] __add_stripe_bio: md127: start ff2721beec8c2fa0(27111972904+8) 4294967260
>> [ 2367.410065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27111972904+8) 4294967259
>> [ 2399.790368] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967260
>> [ 2403.855440] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967261
>> [ 2403.855574] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967262
>> [ 2403.855636] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967263
>> [ 2403.855687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130512104+8) 4294967264
>> [ 2478.513548] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967263
>> [ 2478.523034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967262
>> [ 2478.532518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967261
>> [ 2478.542003] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967260
>> [ 2478.551487] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130512104+8) 4294967259
>> [ 2483.294420] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967264
>> [ 2483.294422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967263
>> [ 2483.294423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967262
>> [ 2483.294425] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967261
>> [ 2483.294426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967260
>> [ 2483.294428] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130518312+8) 4294967259
>> [ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967260
>> [ 2515.139576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967261
>> [ 2515.139584] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967262
>> [ 2515.139592] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967264
>> [ 2515.139587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967263
>> [ 2515.139593] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967265
>> [ 2515.139795] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130521064+8) 4294967266
>> [ 2556.408113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967264
>> [ 2556.417600] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967263
>> [ 2556.427088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967262
>> [ 2556.436578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967261
>> [ 2556.446066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967260
>> [ 2556.455551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130524584+8) 4294967259
>> [ 2559.815547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967260
>> [ 2559.815563] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967261
>> [ 2559.815793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967262
>> [ 2559.815874] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967263
>> [ 2559.816031] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967264
>> [ 2568.256111] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967265
>> [ 2568.256157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130528296+8) 4294967266
>> [ 2619.422458] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967265
>> [ 2619.431942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967264
>> [ 2619.441449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967263
>> [ 2619.450948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967262
>> [ 2619.460435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967261
>> [ 2619.469921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967260
>> [ 2619.479412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130530472+8) 4294967259
>> [ 2633.472211] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967261
>> [ 2633.472205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967260
>> [ 2633.472305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967262
>> [ 2633.472427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967264
>> [ 2633.472417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967263
>> [ 2633.472587] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967265
>> [ 2633.539700] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130534440+8) 4294967266
>> [ 2661.223491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967265
>> [ 2661.223493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967264
>> [ 2661.223494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967263
>> [ 2661.223496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967262
>> [ 2661.223497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967261
>> [ 2661.223498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967260
>> [ 2661.223500] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130534440+8) 4294967259
>> [ 2661.228040] __add_stripe_bio: md127: start ff2721beec8c2fa0(539699176+8) 4294967260
>> [ 2706.576782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(539699176+8) 4294967259
>> [ 2709.937157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967260
>> [ 2709.937171] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967261
>> [ 2709.937316] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967262
>> [ 2709.937650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967263
>> [ 2709.937717] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967264
>> [ 2709.937724] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967265
>> [ 2709.937725] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967266
>> [ 2709.937737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130541416+8) 4294967267
>> [ 2721.462599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967266
>> [ 2721.610877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967265
>> [ 2721.759136] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967264
>> [ 2721.907488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967263
>> [ 2722.055771] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967262
>> [ 2722.204050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967261
>> [ 2722.352331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967260
>> [ 2722.500592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130541416+8) 4294967259
>> [ 2724.772625] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967260
>> [ 2724.772751] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967261
>> [ 2724.772938] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967262
>> [ 2724.772988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967263
>> [ 2724.773119] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130543016+8) 4294967264
>> [ 2754.673788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967263
>> [ 2754.673790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967262
>> [ 2754.673791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967261
>> [ 2754.673794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967260
>> [ 2754.673795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130543016+8) 4294967259
>> [ 2785.394053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967261
>> [ 2785.394056] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967263
>> [ 2785.394059] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967265
>> [ 2785.394054] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967262
>> [ 2785.394058] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967264
>> [ 2785.394050] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967260
>> [ 2785.463401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967266
>> [ 2785.472247] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130547432+8) 4294967267
>> [ 2787.076731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967266
>> [ 2787.076732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967265
>> [ 2787.076734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967264
>> [ 2787.076735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967263
>> [ 2787.076736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967262
>> [ 2787.076738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967261
>> [ 2787.076739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967260
>> [ 2787.076740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130547432+8) 4294967259
>> [ 2808.905214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967265
>> [ 2808.905333] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967266
>> [ 2808.905388] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130550312+8) 4294967267
>> [ 2808.906939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967266
>> [ 2808.906941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967265
>> [ 2808.906943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967264
>> [ 2808.906944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967263
>> [ 2808.906946] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967262
>> [ 2808.906947] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967261
>> [ 2808.906950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967260
>> [ 2808.906952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130550312+8) 4294967259
>> [ 2836.311276] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130552808+8) 4294967260
>> [ 2854.798417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
>> [ 2856.543067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967264
>> [ 2856.543070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967263
>> [ 2856.543073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967262
>> [ 2856.543075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967261
>> [ 2856.543077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967260
>> [ 2856.543079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130552808+8) 4294967259
>> [ 2856.546312] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967260
>> [ 2856.546314] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967261
>> [ 2856.546421] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967262
>> [ 2856.546509] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967263
>> [ 2856.546926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967264
>> [ 2886.489550] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967265
>> [ 2886.489595] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967266
>> [ 2886.489713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130558568+8) 4294967267
>> [ 2897.617989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967266
>> [ 2897.627477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967265
>> [ 2897.636962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967264
>> [ 2897.646444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967263
>> [ 2897.655920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967262
>> [ 2897.665409] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967261
>> [ 2897.674910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967260
>> [ 2897.684404] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130558568+8) 4294967259
>> [ 2899.844282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967266
>> [ 2899.844316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967265
>> [ 2899.844354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967264
>> [ 2899.844382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967263
>> [ 2899.844423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967262
>> [ 2899.844462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967261
>> [ 2899.844516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967260
>> [ 2899.844570] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130560872+8) 4294967259
>> [ 2899.845690] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967260
>> [ 2899.845966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967261
>> [ 2899.846019] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967262
>> [ 2899.846062] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967263
>> [ 2899.846186] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967264
>> [ 2899.846207] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967265
>> [ 2899.846260] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130563048+8) 4294967266
>> [ 2952.891498] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967265
>> [ 2952.900984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967264
>> [ 2952.910478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967263
>> [ 2952.919966] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967262
>> [ 2952.929461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967261
>> [ 2952.938950] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967260
>> [ 2952.948431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130563048+8) 4294967259
>> [ 2955.316494] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967260
>> [ 2955.316704] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967261
>> [ 2955.316809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967262
>> [ 2955.316988] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967263
>> [ 2955.317105] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130566632+8) 4294967264
>> [ 2987.714377] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967263
>> [ 2987.714379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967262
>> [ 2987.714381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967261
>> [ 2987.714383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967260
>> [ 2987.714385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130566632+8) 4294967259
>> [ 2987.719137] __add_stripe_bio: md127: start ff2721beec8c2fa0(1092459240+8) 4294967260
>> [ 3047.110275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967264
>> [ 3047.110276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967263
>> [ 3047.110277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967262
>> [ 3047.110278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967261
>> [ 3047.110279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967260
>> [ 3047.110281] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130574440+8) 4294967259
>> [ 3047.112501] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967260
>> [ 3047.112711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967261
>> [ 3047.112750] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967262
>> [ 3070.186991] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967263
>> [ 3070.187120] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130577896+8) 4294967264
>> [ 3110.127145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967263
>> [ 3110.136633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967262
>> [ 3110.146121] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967261
>> [ 3110.155611] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967260
>> [ 3110.165103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130577896+8) 4294967259
>> [ 3113.362512] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967260
>> [ 3113.362527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967261
>> [ 3113.362737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967262
>> [ 3113.362788] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967264
>> [ 3113.362772] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967263
>> [ 3113.363524] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130581928+8) 4294967265
>> [ 3181.040787] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967264
>> [ 3181.050278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967263
>> [ 3181.059767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967262
>> [ 3181.069267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967261
>> [ 3181.078760] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967260
>> [ 3181.088248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130581928+8) 4294967259
>> [ 3190.006331] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967260
>> [ 3190.006353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967261
>> [ 3190.006523] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967262
>> [ 3190.006526] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967263
>> [ 3190.006576] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967264
>> [ 3190.006604] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967265
>> [ 3190.006676] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130586472+8) 4294967266
>> [ 3222.489157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130590248+8) 4294967259
>> [ 3222.494665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967260
>> [ 3222.494810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967261
>> [ 3222.495400] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967262
>> [ 3222.495460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967263
>> [ 3222.496203] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967264
>> [ 3222.496266] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130593576+8) 4294967265
>> [ 3249.542100] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967264
>> [ 3249.542102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967263
>> [ 3249.542103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967262
>> [ 3249.542105] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967261
>> [ 3249.542107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967260
>> [ 3249.542109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130593576+8) 4294967259
>> [ 3249.547575] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130590440+8) 4294967260
>> [ 3298.070385] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967260
>> [ 3298.070466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967261
>> [ 3298.070767] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967262
>> [ 3298.070824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967263
>> [ 3298.070896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130603176+8) 4294967264
>> [ 3351.191989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967263
>> [ 3351.201478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967262
>> [ 3351.210961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967261
>> [ 3351.220447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967260
>> [ 3351.229931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130603176+8) 4294967259
>> [ 3354.186090] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967260
>> [ 3354.186174] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967261
>> [ 3354.186453] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967262
>> [ 3354.186600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967263
>> [ 3354.186610] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967264
>> [ 3354.186666] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967265
>> [ 3354.186682] __add_stripe_bio: md127: start ff2721beec8c2fa0(27130606312+8) 4294967266
>> [ 3395.962921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967265
>> [ 3395.972408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967264
>> [ 3395.981893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967263
>> [ 3395.991379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967262
>> [ 3396.000863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967261
>> [ 3396.010344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967260
>> [ 3396.019828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27130606312+8) 4294967259
>> [ 3397.783940] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967260
>> [ 3397.783984] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967261
>> [ 3397.784015] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967262
>> [ 3397.784039] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967263
>> [ 3397.784102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967264
>> [ 3397.784112] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967265
>> [ 3397.784206] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967266
>> [ 3397.784239] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399959720+8) 4294967267
>> [ 3407.297456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967266
>> [ 3407.515478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967265
>> [ 3407.733622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967264
>> [ 3407.952516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967263
>> [ 3408.171430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967262
>> [ 3408.390355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967261
>> [ 3408.609270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967260
>> [ 3408.828226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399959720+8) 4294967259
>> [ 3410.118755] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967260
>> [ 3410.118912] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967261
>> [ 3410.119044] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967262
>> [ 3410.119183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967263
>> [ 3410.119190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967264
>> [ 3410.119398] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399965352+8) 4294967265
>> [ 3474.070919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967264
>> [ 3474.080400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967263
>> [ 3474.089879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967262
>> [ 3474.099370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967261
>> [ 3474.108856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967260
>> [ 3474.118342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399965352+8) 4294967259
>> [ 3477.161290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967261
>> [ 3477.161249] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967260
>> [ 3477.161478] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967262
>> [ 3477.161505] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967263
>> [ 3487.704510] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967264
>> [ 3487.704567] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399969512+8) 4294967265
>> [ 3510.992084] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967264
>> [ 3510.992086] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967263
>> [ 3510.992088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967262
>> [ 3510.992089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967261
>> [ 3510.992090] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967260
>> [ 3510.992091] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399969512+8) 4294967259
>> [ 3510.992993] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3510.993007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
>> [ 3550.083557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3550.083561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
>> [ 3555.089305] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3555.089386] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967261
>> [ 3555.089618] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967262
>> [ 3555.089635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967263
>> [ 3555.089655] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399973352+8) 4294967264
>> [ 3572.781054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967263
>> [ 3572.790547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967262
>> [ 3572.800034] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967261
>> [ 3572.809523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967260
>> [ 3572.819005] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399973352+8) 4294967259
>> [ 3589.172647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967266
>> [ 3589.172750] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967265
>> [ 3589.172860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967264
>> [ 3589.172991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967263
>> [ 3589.173151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967262
>> [ 3589.173314] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967261
>> [ 3589.173391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967260
>> [ 3589.173461] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399976104+8) 4294967259
>> [ 3589.175972] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743848+8) 4294967260
>> [ 3621.769395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032743848+8) 4294967259
>> [ 3623.304014] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399975272+8) 4294967260
>> [ 3686.974958] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399986216+8) 4294967267
>> [ 3722.135513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967266
>> [ 3722.144998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967265
>> [ 3722.154484] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967264
>> [ 3722.163972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967263
>> [ 3722.173455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967262
>> [ 3722.182939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967261
>> [ 3722.192426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967260
>> [ 3722.201912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399986216+8) 4294967259
>> [ 3729.633922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967260
>> [ 3729.634199] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967261
>> [ 3729.634228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967262
>> [ 3729.634351] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967263
>> [ 3729.634466] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967264
>> [ 3737.013926] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967265
>> [ 3737.016635] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399989800+8) 4294967266
>> [ 3761.542817] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967265
>> [ 3761.542819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967264
>> [ 3761.542820] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967263
>> [ 3761.542822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967262
>> [ 3761.542824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967261
>> [ 3761.542826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967260
>> [ 3761.542827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399989800+8) 4294967259
>> [ 3761.545145] __add_stripe_bio: md127: start ff2721beec8c2fa0(4298178536+8) 4294967260
>> [ 3781.220916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(4298178536+8) 4294967259
>> [ 3816.363850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967261
>> [ 3816.363852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967260
>> [ 3816.363853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27399990824+8) 4294967259
>> [ 3816.366775] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967260
>> [ 3816.367295] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967261
>> [ 3816.367301] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967262
>> [ 3816.367544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967263
>> [ 3816.367693] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967264
>> [ 3816.368092] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967265
>> [ 3843.302810] __add_stripe_bio: md127: start ff2721beec8c2fa0(27399995816+8) 4294967266
>> [ 3869.720089] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967261
>> [ 3869.720097] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967262
>> [ 3869.720194] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967263
>> [ 3869.720213] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967264
>> [ 3869.725214] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400001256+8) 4294967265
>> [ 3911.191478] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967264
>> [ 3911.200970] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967263
>> [ 3911.210456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967262
>> [ 3911.219942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967261
>> [ 3911.229426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967260
>> [ 3911.238911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400001256+8) 4294967259
>> [ 3914.293028] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967264
>> [ 3914.293031] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967263
>> [ 3914.293033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967262
>> [ 3914.293035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967261
>> [ 3914.293038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967260
>> [ 3914.293040] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400004328+8) 4294967259
>> [ 3914.295622] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967261
>> [ 3914.295641] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967262
>> [ 3914.295643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967263
>> [ 3914.295621] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967260
>> [ 3914.295871] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400007592+8) 4294967264
>> [ 3999.621383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967263
>> [ 3999.630885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967262
>> [ 3999.640370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967261
>> [ 3999.649865] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967260
>> [ 3999.659351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400007592+8) 4294967259
>> [ 4004.391868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967260
>> [ 4004.391913] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967261
>> [ 4004.392117] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967262
>> [ 4004.392564] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967263
>> [ 4004.392579] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967264
>> [ 4004.392650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967265
>> [ 4004.392858] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400016232+8) 4294967266
>> [ 4074.054694] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967265
>> [ 4074.064183] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967264
>> [ 4074.073668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967263
>> [ 4074.083155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967262
>> [ 4074.092647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967261
>> [ 4074.102149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967260
>> [ 4074.111642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400016232+8) 4294967259
>> [ 4114.890444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967265
>> [ 4114.890445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967264
>> [ 4114.890446] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967263
>> [ 4114.890447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967262
>> [ 4114.890448] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967261
>> [ 4114.890449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967260
>> [ 4114.890450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400020712+8) 4294967259
>> [ 4114.894014] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032743784+8) 4294967260
>> [ 4137.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967260
>> [ 4137.422134] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967261
>> [ 4137.422222] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967262
>> [ 4137.422380] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967263
>> [ 4137.422447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967264
>> [ 4137.422527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967265
>> [ 4137.422809] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400026664+8) 4294967266
>> [ 4195.441417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967265
>> [ 4195.450902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967264
>> [ 4195.460386] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967263
>> [ 4195.469873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967262
>> [ 4195.479360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967261
>> [ 4195.488848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967260
>> [ 4195.498336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400026664+8) 4294967259
>> [ 4205.104432] __add_stripe_bio: md127: start ff2721beec8c2fa0(6458132264+8) 4294967260
>> [ 4270.444837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6458132264+8) 4294967259
>> [ 4282.274860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967265
>> [ 4282.274879] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967264
>> [ 4282.274897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967263
>> [ 4282.274916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967262
>> [ 4282.274936] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967261
>> [ 4282.274955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967260
>> [ 4282.274975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400036904+8) 4294967259
>> [ 4282.276460] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967260
>> [ 4282.276797] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967261
>> [ 4282.276964] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967262
>> [ 4282.277061] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967263
>> [ 4282.277143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967264
>> [ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967266
>> [ 4282.277191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967265
>> [ 4282.277271] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400039464+8) 4294967267
>> [ 4282.321155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967266
>> [ 4282.387435] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967265
>> [ 4282.448733] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967264
>> [ 4282.448742] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967263
>> [ 4282.448751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967262
>> [ 4282.448759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967261
>> [ 4282.448767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967260
>> [ 4282.448775] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400039464+8) 4294967259
>> [ 4315.061841] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967260
>> [ 4315.061855] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967262
>> [ 4315.061844] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967261
>> [ 4315.061924] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967263
>> [ 4315.061976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967264
>> [ 4315.063503] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400040168+8) 4294967265
>> [ 4382.511212] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967264
>> [ 4382.520702] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967263
>> [ 4382.530180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967262
>> [ 4382.539665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967261
>> [ 4382.549163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967260
>> [ 4382.558657] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400040168+8) 4294967259
>> [ 4387.176732] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967260
>> [ 4387.176821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967261
>> [ 4387.176898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967262
>> [ 4387.177030] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967263
>> [ 4387.177229] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967264
>> [ 4387.177270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400046440+8) 4294967265
>> [ 4457.957710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967264
>> [ 4457.957715] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967263
>> [ 4457.957719] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967262
>> [ 4457.957723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967261
>> [ 4457.957727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967260
>> [ 4457.957731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400046440+8) 4294967259
>> [ 4457.961270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967260
>> [ 4457.961406] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967261
>> [ 4457.961619] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967262
>> [ 4457.961651] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967264
>> [ 4457.961806] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967265
>> [ 4457.961645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400051432+8) 4294967263
>> [ 4485.589613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967264
>> [ 4485.589615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967263
>> [ 4485.589616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967262
>> [ 4485.589617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967261
>> [ 4485.589618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967260
>> [ 4485.589619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400051432+8) 4294967259
>> [ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967260
>> [ 4485.593052] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967261
>> [ 4485.593288] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967262
>> [ 4485.593416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967263
>> [ 4485.593532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967264
>> [ 4485.593652] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967265
>> [ 4485.593678] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967266
>> [ 4485.850480] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400056936+8) 4294967267
>> [ 4515.537222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967266
>> [ 4515.537223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967265
>> [ 4515.537224] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967264
>> [ 4515.537226] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967263
>> [ 4515.537227] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967262
>> [ 4515.537228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967261
>> [ 4515.537229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967260
>> [ 4515.537231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400056936+8) 4294967259
>> [ 4515.539155] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967260
>> [ 4515.539253] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967261
>> [ 4515.539324] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967262
>> [ 4515.539513] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967263
>> [ 4515.539522] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967264
>> [ 4543.939187] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400058088+8) 4294967265
>> [ 4567.298898] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967264
>> [ 4567.308382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967263
>> [ 4567.317870] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967262
>> [ 4567.327355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967261
>> [ 4567.336851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967260
>> [ 4567.346342] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400058088+8) 4294967259
>> [ 4574.769978] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967260
>> [ 4574.770644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967261
>> [ 4574.770713] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967262
>> [ 4585.659234] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967263
>> [ 4585.659638] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967264
>> [ 4585.659851] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400062760+8) 4294967265
>> [ 4628.062519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967264
>> [ 4628.062521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967263
>> [ 4628.062522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967262
>> [ 4628.062524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967261
>> [ 4628.062525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967260
>> [ 4628.062526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400062760+8) 4294967259
>> [ 4628.067547] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967260
>> [ 4628.067553] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967262
>> [ 4628.067549] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967261
>> [ 4628.067556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967263
>> [ 4628.067558] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967264
>> [ 4628.067643] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967265
>> [ 4628.067650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400067240+8) 4294967266
>> [ 4655.735972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967266
>> [ 4655.738016] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400069224+8) 4294967267
>> [ 4655.740269] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967266
>> [ 4655.740270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967265
>> [ 4655.740271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967264
>> [ 4655.740273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967263
>> [ 4655.740274] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967262
>> [ 4655.740275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967261
>> [ 4655.740277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967260
>> [ 4655.740278] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400069224+8) 4294967259
>> [ 4655.744826] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967260
>> [ 4655.745042] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967261
>> [ 4655.745074] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967262
>> [ 4655.745162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967263
>> [ 4684.693786] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400071144+8) 4294967264
>> [ 4707.657198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967263
>> [ 4707.666685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967262
>> [ 4707.676172] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967261
>> [ 4707.685664] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967260
>> [ 4707.695155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400071144+8) 4294967259
>> [ 4714.104353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967260
>> [ 4714.104370] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967261
>> [ 4714.104532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967262
>> [ 4714.104556] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967263
>> [ 4714.104738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967264
>> [ 4714.104749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967265
>> [ 4714.104870] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400074856+8) 4294967266
>> [ 4767.415146] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967265
>> [ 4767.424642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967264
>> [ 4767.434126] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967263
>> [ 4767.443610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967262
>> [ 4767.453092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967261
>> [ 4767.462576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967260
>> [ 4767.472063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400074856+8) 4294967259
>> [ 4772.506161] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967260
>> [ 4772.506215] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967261
>> [ 4772.506341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967262
>> [ 4772.506665] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967263
>> [ 4772.506877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967264
>> [ 4772.507010] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967265
>> [ 4788.406891] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400078056+8) 4294967266
>> [ 4841.522571] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967260
>> [ 4841.523093] __add_stripe_bio: md127: start ff2721beec8c2fa0(27400085928+8) 4294967261
>> [ 4907.596094] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967260
>> [ 4907.596096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27400085928+8) 4294967259
>> [ 4907.596899] __add_stripe_bio: md127: start ff2721beec8c2fa0(27651747176+8) 4294967260
>> [ 4973.644083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27651747176+8) 4294967259
>> [ 4983.401423] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967260
>> [ 4983.401434] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967261
>> [ 4983.401439] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667021416+8) 4294967262
>> [ 4983.412449] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
>> [ 4983.412450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
>> [ 4983.412452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
>> [ 5009.844830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967263
>> [ 5009.844831] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967262
>> [ 5009.844832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967261
>> [ 5009.844834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967260
>> [ 5009.844835] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667021416+8) 4294967259
>> [ 5036.841498] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967260
>> [ 5036.841516] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967261
>> [ 5036.841589] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967262
>> [ 5036.841711] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967263
>> [ 5036.841866] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967264
>> [ 5036.842021] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667025832+8) 4294967265
>> [ 5065.748527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967264
>> [ 5065.758011] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967263
>> [ 5065.767510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967262
>> [ 5065.776985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967261
>> [ 5065.786473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967260
>> [ 5065.795964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667025832+8) 4294967259
>> [ 5069.198341] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967260
>> [ 5069.198447] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967261
>> [ 5069.198475] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967262
>> [ 5069.198565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967263
>> [ 5069.198600] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967264
>> [ 5069.198657] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967265
>> [ 5069.198719] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967266
>> [ 5069.198749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667029032+8) 4294967267
>> [ 5158.739304] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967265
>> [ 5158.748804] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967264
>> [ 5158.758295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967263
>> [ 5158.767784] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967262
>> [ 5158.777267] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967261
>> [ 5158.786749] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967260
>> [ 5158.796231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667030504+8) 4294967259
>> [ 5174.398776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967260
>> [ 5174.398877] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967261
>> [ 5174.398898] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967262
>> [ 5174.398990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967263
>> [ 5174.399068] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967264
>> [ 5174.399165] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667037928+8) 4294967265
>> [ 5215.123596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967264
>> [ 5215.133088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967263
>> [ 5215.142587] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967262
>> [ 5215.152066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967261
>> [ 5215.161549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967260
>> [ 5215.171035] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667037928+8) 4294967259
>> [ 5223.954743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967260
>> [ 5223.954779] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967261
>> [ 5223.954951] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967262
>> [ 5223.955007] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967263
>> [ 5223.955223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967264
>> [ 5223.955228] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967265
>> [ 5223.955230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967266
>> [ 5223.955472] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667042408+8) 4294967267
>> [ 5259.738014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967266
>> [ 5260.031050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967265
>> [ 5260.324115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967264
>> [ 5260.617186] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967263
>> [ 5260.910270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967262
>> [ 5261.203354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967261
>> [ 5261.496406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967260
>> [ 5261.789480] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667042408+8) 4294967259
>> [ 5265.172862] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967260
>> [ 5265.173244] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967261
>> [ 5265.173322] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967262
>> [ 5265.173763] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967263
>> [ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967265
>> [ 5265.173927] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967264
>> [ 5265.173928] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967266
>> [ 5265.173973] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667045608+8) 4294967267
>> [ 5294.960280] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967266
>> [ 5295.249813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967265
>> [ 5295.539340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967264
>> [ 5295.829752] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967263
>> [ 5296.120184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967262
>> [ 5296.410623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967261
>> [ 5296.701004] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967260
>> [ 5296.991418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667045608+8) 4294967259
>> [ 5298.908411] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967260
>> [ 5298.908458] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967261
>> [ 5298.908544] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967262
>> [ 5298.908650] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967263
>> [ 5298.908710] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967264
>> [ 5298.909051] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667048360+8) 4294967265
>> [ 5413.856013] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667058856+8) 4294967266
>> [ 5446.671677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967265
>> [ 5446.671679] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967264
>> [ 5446.671680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967263
>> [ 5446.671681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967262
>> [ 5446.671682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967261
>> [ 5446.671683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967260
>> [ 5446.671684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667058856+8) 4294967259
>> [ 5479.015532] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967260
>> [ 5479.015632] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967261
>> [ 5479.015683] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967262
>> [ 5479.015735] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967263
>> [ 5492.731793] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967264
>> [ 5492.731911] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667064808+8) 4294967265
>> [ 5506.007986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967264
>> [ 5506.007988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967263
>> [ 5506.007991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967262
>> [ 5506.007994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967261
>> [ 5506.007997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967260
>> [ 5506.008000] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667064808+8) 4294967259
>> [ 5506.011738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967260
>> [ 5506.011854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967261
>> [ 5506.011861] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967262
>> [ 5506.012133] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967263
>> [ 5506.012143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667070888+8) 4294967264
>> [ 5555.890832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967263
>> [ 5555.900322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967262
>> [ 5555.909852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967261
>> [ 5555.919336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967260
>> [ 5555.928823] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667070888+8) 4294967259
>> [ 5574.002280] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967260
>> [ 5574.002313] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967261
>> [ 5574.002403] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967262
>> [ 5574.002468] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967263
>> [ 5574.002561] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967264
>> [ 5574.002645] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667074728+8) 4294967265
>> [ 5606.796975] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967264
>> [ 5606.796977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967263
>> [ 5606.796978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967262
>> [ 5606.796979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967261
>> [ 5606.796981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967260
>> [ 5606.796982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667074728+8) 4294967259
>> [ 5606.798208] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967260
>> [ 5606.798527] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967261
>> [ 5606.798585] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967262
>> [ 5606.798607] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967263
>> [ 5606.803857] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967264
>> [ 5606.804282] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967265
>> [ 5639.962722] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667079720+8) 4294967266
>> [ 5652.645345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967265
>> [ 5652.654833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967264
>> [ 5652.664323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967263
>> [ 5652.673815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967262
>> [ 5652.683294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967261
>> [ 5652.692781] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967260
>> [ 5654.603101] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667079720+8) 4294967259
>> [ 5654.613230] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967260
>> [ 5654.613572] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967261
>> [ 5654.613687] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967262
>> [ 5654.613814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967263
>> [ 5654.614055] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667080872+8) 4294967264
>> [ 5683.045381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967263
>> [ 5683.045383] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967262
>> [ 5683.045385] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967261
>> [ 5683.045387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967260
>> [ 5683.045388] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667080872+8) 4294967259
>> [ 5683.048586] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967260
>> [ 5683.048965] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967261
>> [ 5683.049073] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967262
>> [ 5683.049140] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967263
>> [ 5683.049162] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967264
>> [ 5683.049196] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967265
>> [ 5683.049256] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967266
>> [ 5683.474474] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667086696+8) 4294967267
>> [ 5723.855633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967266
>> [ 5724.027114] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967265
>> [ 5724.198621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967264
>> [ 5724.370117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967263
>> [ 5724.541614] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967262
>> [ 5724.713065] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967261
>> [ 5724.884518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967260
>> [ 5725.055989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667086696+8) 4294967259
>> [ 5730.407790] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967260
>> [ 5730.407821] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967261
>> [ 5730.407937] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967263
>> [ 5730.407896] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967262
>> [ 5730.408159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967265
>> [ 5730.408143] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967264
>> [ 5730.408353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667087784+8) 4294967266
>> [ 5758.122868] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967260
>> [ 5758.123578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967261
>> [ 5758.123627] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967262
>> [ 5758.130240] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967263
>> [ 5758.130420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967264
>> [ 5758.130534] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667094824+8) 4294967265
>> [ 5846.663594] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967264
>> [ 5846.673081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967263
>> [ 5846.682568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967262
>> [ 5846.692061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967261
>> [ 5846.701549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967260
>> [ 5846.711038] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667098600+8) 4294967259
>> [ 5911.098887] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967265
>> [ 5911.108378] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967264
>> [ 5911.117873] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967263
>> [ 5911.127372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967262
>> [ 5911.136857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967261
>> [ 5911.146341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967260
>> [ 5911.155824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667106344+8) 4294967259
>> [ 5928.422955] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967260
>> [ 5928.422960] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967261
>> [ 5928.422966] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967262
>> [ 5928.422974] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967264
>> [ 5928.422972] __add_stripe_bio: md127: start ff2721beec8c2fa0(27667111592+8) 4294967263
>> [ 6014.681387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967265
>> [ 6014.690871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967264
>> [ 6014.700360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967263
>> [ 6014.709859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967262
>> [ 6014.719355] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967261
>> [ 6014.728844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967260
>> [ 6014.738332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667117352+8) 4294967259
>> [ 6056.379708] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967265
>> [ 6056.389200] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967264
>> [ 6056.398687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967263
>> [ 6056.408178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967262
>> [ 6056.417667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967261
>> [ 6056.427155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967260
>> [ 6056.436640] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27667123432+8) 4294967259
>> [ 6063.789506] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967260
>> [ 6063.789738] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967261
>> [ 6063.789989] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967262
>> [ 6063.790034] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967263
>> [ 6063.790190] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967264
>> [ 6063.790287] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967265
>> [ 6078.230963] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935493416+8) 4294967266
>> [ 6105.353125] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967265
>> [ 6105.362612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967264
>> [ 6105.372093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967263
>> [ 6105.381577] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967262
>> [ 6105.391064] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967261
>> [ 6105.400555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967260
>> [ 6105.410041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935493416+8) 4294967259
>> [ 6128.587354] __add_stripe_bio: md127: start ff2721beec8c2fa0(27920223080+8) 4294967260
>> [ 6201.907683] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27920223080+8) 4294967259
>> [ 6210.113728] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032744680+8) 4294967259
>> [ 6283.753223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967264
>> [ 6283.762710] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967263
>> [ 6283.772194] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967262
>> [ 6283.781677] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967261
>> [ 6283.791163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967260
>> [ 6283.800647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935506280+8) 4294967259
>> [ 6294.185205] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967260
>> [ 6294.185349] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935512488+8) 4294967261
>> [ 6354.564956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967266
>> [ 6354.565008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967265
>> [ 6354.565050] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967264
>> [ 6354.565103] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967263
>> [ 6354.565143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967262
>> [ 6354.565198] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967261
>> [ 6354.565250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967260
>> [ 6354.565295] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935517864+8) 4294967259
>> [ 6354.571582] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967260
>> [ 6354.571613] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967261
>> [ 6354.571614] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967262
>> [ 6354.572095] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967263
>> [ 6381.572101] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967264
>> [ 6381.572150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967265
>> [ 6381.572462] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935519464+8) 4294967266
>> [ 6417.668789] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967265
>> [ 6417.678285] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967264
>> [ 6417.687773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967263
>> [ 6417.697266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967262
>> [ 6417.706822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967261
>> [ 6417.716318] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967260
>> [ 6417.725807] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935519464+8) 4294967259
>> [ 6442.242691] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967260
>> [ 6442.242776] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967261
>> [ 6442.242901] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967262
>> [ 6442.242998] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967263
>> [ 6442.243060] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967264
>> [ 6442.243109] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935524328+8) 4294967265
>> [ 6487.368252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967264
>> [ 6487.368256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967263
>> [ 6487.384984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967262
>> [ 6487.401709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967261
>> [ 6487.418441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967260
>> [ 6487.418447] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935524328+8) 4294967259
>> [ 6512.350543] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967260
>> [ 6512.351290] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967261
>> [ 6512.351395] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967262
>> [ 6512.351419] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967263
>> [ 6512.351565] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967264
>> [ 6512.351578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967265
>> [ 6512.351611] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935529384+8) 4294967266
>> [ 6558.339111] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967265
>> [ 6558.339113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967264
>> [ 6558.339115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967263
>> [ 6558.339118] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967262
>> [ 6558.339120] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967261
>> [ 6558.339122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967260
>> [ 6558.339123] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935529384+8) 4294967259
>> [ 6604.012612] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967265
>> [ 6604.012615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967264
>> [ 6604.012617] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967263
>> [ 6604.012619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967262
>> [ 6604.012622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967261
>> [ 6604.012624] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967260
>> [ 6604.012626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935535336+8) 4294967259
>> [ 6636.116612] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967260
>> [ 6636.117018] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967261
>> [ 6636.117064] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967262
>> [ 6636.117191] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967263
>> [ 6636.117217] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967265
>> [ 6636.117204] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967264
>> [ 6636.117365] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935541672+8) 4294967266
>> [ 6697.656762] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967265
>> [ 6697.666250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967264
>> [ 6697.675731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967263
>> [ 6697.685213] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967262
>> [ 6697.694703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967261
>> [ 6697.704188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967260
>> [ 6697.713685] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935541672+8) 4294967259
>> [ 6699.748818] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967260
>> [ 6699.749045] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967261
>> [ 6699.749350] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967262
>> [ 6699.749488] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967264
>> [ 6699.749487] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967263
>> [ 6699.749673] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967265
>> [ 6700.169570] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935546472+8) 4294967266
>> [ 6714.982644] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967264
>> [ 6714.982749] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935551336+8) 4294967265
>> [ 6752.225916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967264
>> [ 6752.235410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967263
>> [ 6752.244901] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967262
>> [ 6752.254387] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967261
>> [ 6752.263875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967260
>> [ 6752.273361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935551336+8) 4294967259
>> [ 6763.509990] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967260
>> [ 6763.510135] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967261
>> [ 6763.510150] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967262
>> [ 6763.510183] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967263
>> [ 6763.510242] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967264
>> [ 6763.510270] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967265
>> [ 6763.512906] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935556136+8) 4294967266
>> [ 6823.022727] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967265
>> [ 6823.022730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967264
>> [ 6823.022731] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967263
>> [ 6823.022734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967262
>> [ 6823.022735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967261
>> [ 6823.022736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967260
>> [ 6823.022738] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935556136+8) 4294967259
>> [ 6823.024701] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967260
>> [ 6823.024824] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967261
>> [ 6823.025069] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967263
>> [ 6823.024976] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967262
>> [ 6823.025323] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967264
>> [ 6823.025427] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935561512+8) 4294967265
>> [ 6929.234367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967264
>> [ 6929.243863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967263
>> [ 6929.253358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967262
>> [ 6929.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967261
>> [ 6929.272333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967260
>> [ 6929.281822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935561512+8) 4294967259
>> [ 6930.403685] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967260
>> [ 6930.403904] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967261
>> [ 6930.404088] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967262
>> [ 6930.404223] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967263
>> [ 6930.404286] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967264
>> [ 6930.404292] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935567784+8) 4294967265
>> [ 6994.814514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967264
>> [ 6994.824001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967263
>> [ 6994.833494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967262
>> [ 6994.842983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967261
>> [ 6994.852473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967260
>> [ 6994.861960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935567784+8) 4294967259
>> [ 6997.854357] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935572712+8) 4294967265
>> [ 7031.426286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967264
>> [ 7031.435774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967263
>> [ 7031.452511] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967262
>> [ 7031.468341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967261
>> [ 7031.484182] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967260
>> [ 7039.434351] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935572712+8) 4294967259
>> [ 7045.236931] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967260
>> [ 7045.237482] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967261
>> [ 7045.237696] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967262
>> [ 7045.237743] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967263
>> [ 7056.937353] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967264
>> [ 7056.937578] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935578664+8) 4294967265
>> [ 7056.940551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967264
>> [ 7056.940553] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967263
>> [ 7056.940554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967262
>> [ 7056.940555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967261
>> [ 7056.940556] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967260
>> [ 7056.940557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935578664+8) 4294967259
>> [ 7083.864814] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967260
>> [ 7083.865053] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967262
>> [ 7083.865036] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967261
>> [ 7083.865102] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967263
>> [ 7083.865159] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967264
>> [ 7083.964009] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967265
>> [ 7095.497485] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935579560+8) 4294967266
>> [ 7155.158072] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967265
>> [ 7155.158073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967264
>> [ 7155.158074] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967263
>> [ 7155.158076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967262
>> [ 7155.158077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967261
>> [ 7155.158078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967260
>> [ 7155.158079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935579560+8) 4294967259
>> [ 7155.165525] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967260
>> [ 7180.285881] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967261
>> [ 7183.167275] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967262
>> [ 7183.414146] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935586856+8) 4294967263
>> [ 7224.653276] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967265
>> [ 7224.662765] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967264
>> [ 7224.672249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967263
>> [ 7224.681736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967262
>> [ 7224.691228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967261
>> [ 7224.700720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967260
>> [ 7224.710207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935586856+8) 4294967259
>> [ 7229.399854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967260
>> [ 7229.399922] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967261
>> [ 7229.400041] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967262
>> [ 7229.400099] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967263
>> [ 7229.400157] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967264
>> [ 7229.400221] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935593320+8) 4294967265
>> [ 7288.006416] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967260
>> [ 7288.006417] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967261
>> [ 7288.006420] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967262
>> [ 7288.006422] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967263
>> [ 7288.006605] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967264
>> [ 7288.006752] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967265
>> [ 7288.006975] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935599144+8) 4294967266
>> [ 7353.182856] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967261
>> [ 7353.182854] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967260
>> [ 7353.182949] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967262
>> [ 7353.183001] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967263
>> [ 7353.183401] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967264
>> [ 7353.183737] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967266
>> [ 7353.183726] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967265
>> [ 7353.184047] __add_stripe_bio: md127: start ff2721beec8c2fa0(27935607144+8) 4294967267
>> [ 7443.628841] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967264
>> [ 7443.638347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967263
>> [ 7443.647826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967262
>> [ 7443.657311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967261
>> [ 7443.666797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967260
>> [ 7443.676282] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935609128+8) 4294967259
>> [ 7501.172830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967263
>> [ 7501.182322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967262
>> [ 7501.191809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967261
>> [ 7501.201294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967260
>> [ 7501.210778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(27935614952+8) 4294967259
>> [ 7508.208830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967260
>> [ 7508.209597] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967261
>> [ 7508.209670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967262
>> [ 7522.177756] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967263
>> [ 7522.177879] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967264
>> [ 7522.177881] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204178984+8) 4294967265
>> [ 7550.776037] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967264
>> [ 7550.785525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967263
>> [ 7550.795016] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967262
>> [ 7550.804501] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967261
>> [ 7550.813985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967260
>> [ 7550.823470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204178984+8) 4294967259
>> [ 7556.140566] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967260
>> [ 7556.140598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967261
>> [ 7556.140739] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967262
>> [ 7556.140798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967263
>> [ 7556.140931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967264
>> [ 7556.141063] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967265
>> [ 7556.141111] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967266
>> [ 7556.141212] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204189096+8) 4294967267
>> [ 7589.706135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967266
>> [ 7589.991286] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967265
>> [ 7590.277340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967264
>> [ 7590.563347] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967263
>> [ 7590.849389] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967262
>> [ 7591.135445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967261
>> [ 7591.421473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967260
>> [ 7591.707517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204189096+8) 4294967259
>> [ 7606.172838] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204191720+8) 4294967260
>> [ 7703.615017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967263
>> [ 7703.624510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967262
>> [ 7703.634001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967261
>> [ 7703.643491] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967260
>> [ 7703.652977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204191720+8) 4294967259
>> [ 7708.933190] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967260
>> [ 7708.933333] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967261
>> [ 7708.933473] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967262
>> [ 7708.933618] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967263
>> [ 7708.933620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967264
>> [ 7708.933657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967265
>> [ 7708.933663] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204199848+8) 4294967266
>> [ 7758.303406] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967265
>> [ 7758.303407] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967264
>> [ 7758.303408] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967263
>> [ 7758.303410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967262
>> [ 7758.303411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967261
>> [ 7758.303412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967260
>> [ 7758.303413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204199848+8) 4294967259
>> [ 7778.439143] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967260
>> [ 7778.439197] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967261
>> [ 7778.439279] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967262
>> [ 7778.439376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967263
>> [ 7778.439409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967264
>> [ 7778.439494] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204205672+8) 4294967265
>> [ 7858.899558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967264
>> [ 7858.899559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967263
>> [ 7858.899561] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967262
>> [ 7858.899562] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967261
>> [ 7858.899563] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967260
>> [ 7858.899564] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204205672+8) 4294967259
>> [ 7890.583124] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967260
>> [ 7890.583147] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967261
>> [ 7890.583594] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967262
>> [ 7890.583650] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967263
>> [ 7890.584141] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967264
>> [ 7890.584215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967265
>> [ 7890.584351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204213800+8) 4294967266
>> [ 7952.730165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967265
>> [ 7952.739650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967264
>> [ 7952.749137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967263
>> [ 7952.758627] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967262
>> [ 7952.768110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967261
>> [ 7952.777595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967260
>> [ 7952.787077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204213800+8) 4294967259
>> [ 7966.676635] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967260
>> [ 7966.676647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967261
>> [ 7966.676670] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967263
>> [ 7966.676658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967262
>> [ 7966.676686] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967264
>> [ 7966.677634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967265
>> [ 7966.708979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204218216+8) 4294967266
>> [ 8032.243861] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967265
>> [ 8032.253352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967264
>> [ 8032.262845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967263
>> [ 8032.272336] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967262
>> [ 8032.281826] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967261
>> [ 8032.291317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967260
>> [ 8032.300800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204218216+8) 4294967259
>> [ 8043.901514] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967260
>> [ 8043.901516] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967261
>> [ 8043.901563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967262
>> [ 8043.901612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967264
>> [ 8043.901609] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967263
>> [ 8043.907672] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204224168+8) 4294967265
>> [ 8146.468217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967264
>> [ 8146.477717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967263
>> [ 8146.487204] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967262
>> [ 8146.496692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967261
>> [ 8146.506180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967260
>> [ 8146.515671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204224168+8) 4294967259
>> [ 8151.492003] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967260
>> [ 8151.492121] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967261
>> [ 8151.492457] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967262
>> [ 8151.492601] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967264
>> [ 8151.492590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967263
>> [ 8151.492750] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967265
>> [ 8164.821795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204232232+8) 4294967266
>> [ 8200.519377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967260
>> [ 8200.519505] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967261
>> [ 8200.519782] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967262
>> [ 8200.519805] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967263
>> [ 8200.520020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967264
>> [ 8200.520247] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967265
>> [ 8200.520434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967266
>> [ 8200.520558] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204238120+8) 4294967267
>> [ 8231.475052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967266
>> [ 8231.637009] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967265
>> [ 8231.799001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967264
>> [ 8231.960972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967263
>> [ 8232.122948] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967262
>> [ 8232.284905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967261
>> [ 8232.446896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967260
>> [ 8232.608893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204238120+8) 4294967259
>> [ 8251.913368] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967260
>> [ 8251.913382] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967263
>> [ 8251.913377] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967261
>> [ 8251.913388] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967265
>> [ 8251.913387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967264
>> [ 8251.913379] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204240808+8) 4294967262
>> [ 8302.630262] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967261
>> [ 8302.630318] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967263
>> [ 8302.630275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967262
>> [ 8302.630250] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967260
>> [ 8302.630745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204247848+8) 4294967264
>> [ 8377.488095] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967263
>> [ 8377.497581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967262
>> [ 8377.507068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967261
>> [ 8377.516554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967260
>> [ 8377.526049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204247848+8) 4294967259
>> [ 8382.868830] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967260
>> [ 8382.868911] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967261
>> [ 8382.869109] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967262
>> [ 8382.869323] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967263
>> [ 8382.983582] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204252072+8) 4294967264
>> [ 8407.895450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967263
>> [ 8407.895452] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967262
>> [ 8407.895455] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967261
>> [ 8407.895457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967260
>> [ 8407.895459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204252072+8) 4294967259
>> [ 8454.553018] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967264
>> [ 8454.570223] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967263
>> [ 8454.587423] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967262
>> [ 8454.587426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967261
>> [ 8454.604638] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967260
>> [ 8454.621845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204257768+8) 4294967259
>> [ 8498.349228] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967261
>> [ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967262
>> [ 8498.349217] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967260
>> [ 8498.349235] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967263
>> [ 8498.349317] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967264
>> [ 8498.349366] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967265
>> [ 8517.904808] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204263784+8) 4294967266
>> [ 8551.340824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967265
>> [ 8551.350319] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967264
>> [ 8551.359809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967263
>> [ 8551.369300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967262
>> [ 8551.378788] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967261
>> [ 8551.388272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967260
>> [ 8551.397756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204263784+8) 4294967259
>> [ 8599.114364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967265
>> [ 8599.114366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967264
>> [ 8599.114367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967263
>> [ 8599.114368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967262
>> [ 8599.114370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967261
>> [ 8599.114371] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967260
>> [ 8599.114372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204268840+8) 4294967259
>> [ 8599.117759] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
>> [ 8623.906310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
>> [ 8623.909333] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967260
>> [ 8623.909335] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204274216+8) 4294967259
>> [ 8623.909624] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967260
>> [ 8623.910846] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967261
>> [ 8623.913364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967262
>> [ 8625.066563] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204274216+8) 4294967263
>> [ 8651.338552] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204280104+8) 4294967267
>> [ 8693.930170] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967266
>> [ 8693.939647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967265
>> [ 8693.949139] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967264
>> [ 8693.958637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967263
>> [ 8693.968135] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967262
>> [ 8693.977622] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967261
>> [ 8693.987106] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967260
>> [ 8693.996589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204280104+8) 4294967259
>> [ 8703.470915] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967260
>> [ 8703.470931] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967261
>> [ 8703.470985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967264
>> [ 8703.470977] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967263
>> [ 8703.470957] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967262
>> [ 8703.471000] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967265
>> [ 8703.471037] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204283624+8) 4294967266
>> [ 8771.557361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967265
>> [ 8771.566858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967264
>> [ 8771.576344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967263
>> [ 8771.585830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967262
>> [ 8771.595316] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967261
>> [ 8771.604797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967260
>> [ 8771.614273] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28204283624+8) 4294967259
>> [ 8772.929338] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967260
>> [ 8772.929454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967261
>> [ 8772.929545] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967262
>> [ 8772.929764] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967263
>> [ 8772.929816] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967264
>> [ 8772.929850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204289768+8) 4294967265
>> [ 8847.837534] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967263
>> [ 8847.837548] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967264
>> [ 8847.843801] __add_stripe_bio: md127: start ff2721beec8c2fa0(28204297128+8) 4294967265
>> [ 8945.958057] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967260
>> [ 8945.958072] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967261
>> [ 8945.958101] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967262
>> [ 8945.958105] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967263
>> [ 8945.958112] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967264
>> [ 8945.958137] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472893352+8) 4294967265
>> [ 8991.941073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967264
>> [ 8991.941075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967263
>> [ 8991.941076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967262
>> [ 8991.941077] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967261
>> [ 8991.941078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967260
>> [ 8991.941080] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472893352+8) 4294967259
>> [ 9036.005328] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967260
>> [ 9036.005409] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967261
>> [ 9036.006275] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967262
>> [ 9036.006348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967263
>> [ 9036.006436] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967264
>> [ 9036.006517] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472899304+8) 4294967265
>> [ 9089.090686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967264
>> [ 9089.100181] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967263
>> [ 9089.109670] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967262
>> [ 9089.119157] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967261
>> [ 9089.128642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967260
>> [ 9089.138130] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472899304+8) 4294967259
>> [ 9120.298531] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967266
>> [ 9120.298540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967265
>> [ 9120.298547] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967264
>> [ 9120.298555] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967263
>> [ 9120.298568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967262
>> [ 9120.298581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967261
>> [ 9120.298599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967260
>> [ 9120.298613] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472904744+8) 4294967259
>> [ 9120.440293] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967260
>> [ 9120.440348] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967261
>> [ 9120.440387] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967262
>> [ 9120.440528] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967263
>> [ 9120.440553] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967264
>> [ 9120.440625] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967265
>> [ 9157.076832] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472909032+8) 4294967266
>> [ 9204.360610] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967265
>> [ 9204.370096] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967264
>> [ 9204.379585] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967263
>> [ 9204.389070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967262
>> [ 9204.398557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967261
>> [ 9204.408047] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967260
>> [ 9204.417534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472909032+8) 4294967259
>> [ 9235.036854] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967260
>> [ 9235.036941] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967261
>> [ 9235.036985] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967262
>> [ 9235.037076] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967263
>> [ 9235.037103] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967264
>> [ 9235.037202] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472915624+8) 4294967265
>> [ 9326.601083] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967260
>> [ 9326.601219] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967261
>> [ 9326.601490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967262
>> [ 9326.601519] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967263
>> [ 9326.601556] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967264
>> [ 9326.601644] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967265
>> [ 9326.601736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967266
>> [ 9326.607310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472925480+8) 4294967267
>> [ 9358.046046] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967266
>> [ 9358.055533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967265
>> [ 9358.065019] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967264
>> [ 9358.074517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967263
>> [ 9358.084002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967262
>> [ 9358.093495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967261
>> [ 9358.102997] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967260
>> [ 9358.112485] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472925480+8) 4294967259
>> [ 9361.800927] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967260
>> [ 9361.801104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967261
>> [ 9361.801282] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967262
>> [ 9361.801484] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967263
>> [ 9361.801527] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967264
>> [ 9361.801620] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967265
>> [ 9361.801653] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472927848+8) 4294967266
>> [ 9430.438322] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967265
>> [ 9430.447809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967264
>> [ 9430.457294] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967263
>> [ 9430.466782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967262
>> [ 9430.476277] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967261
>> [ 9430.485766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967260
>> [ 9430.495250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472927848+8) 4294967259
>> [ 9444.781947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967260
>> [ 9444.781963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967261
>> [ 9444.782418] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967262
>> [ 9444.782605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967263
>> [ 9444.782662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967264
>> [ 9444.782718] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967265
>> [ 9444.782857] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967266
>> [ 9444.782963] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472932456+8) 4294967267
>> [ 9460.949714] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967266
>> [ 9461.002252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967265
>> [ 9461.054806] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967264
>> [ 9461.107370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967263
>> [ 9461.159913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967262
>> [ 9461.212473] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967261
>> [ 9461.265014] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967260
>> [ 9461.317558] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472932456+8) 4294967259
>> [ 9472.880989] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967260
>> [ 9472.881022] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967262
>> [ 9472.881013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967261
>> [ 9472.881466] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967263
>> [ 9472.881585] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967264
>> [ 9472.881617] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967265
>> [ 9472.881636] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967266
>> [ 9473.230016] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472934568+8) 4294967267
>> [ 9484.525992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967266
>> [ 9484.607866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967265
>> [ 9484.690643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967264
>> [ 9484.773393] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967263
>> [ 9484.856144] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967262
>> [ 9484.938897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967261
>> [ 9485.021665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967260
>> [ 9485.104419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472934568+8) 4294967259
>> [ 9565.878800] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967265
>> [ 9565.888283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967264
>> [ 9565.897767] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967263
>> [ 9565.907250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967262
>> [ 9565.916734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967261
>> [ 9565.926219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967260
>> [ 9565.935703] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472938920+8) 4294967259
>> [ 9569.109038] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032745832+8) 4294967260
>> [ 9613.943060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032745832+8) 4294967259
>> [ 9625.083909] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967264
>> [ 9625.083911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967263
>> [ 9625.083913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967262
>> [ 9625.083914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967261
>> [ 9625.083916] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967260
>> [ 9625.083917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472948456+8) 4294967259
>> [ 9686.963155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8599505896+8) 4294967259
>> [ 9706.444605] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967260
>> [ 9706.444608] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967261
>> [ 9706.444612] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967262
>> [ 9706.444615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967263
>> [ 9706.444671] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967264
>> [ 9706.462013] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967265
>> [ 9709.460551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472960104+8) 4294967266
>> [ 9713.748452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967266
>> [ 9713.748682] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472964136+8) 4294967267
>> [ 9713.753732] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967266
>> [ 9713.753735] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967265
>> [ 9713.753736] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967264
>> [ 9713.753737] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967263
>> [ 9713.753739] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967262
>> [ 9713.753740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967261
>> [ 9713.753741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967260
>> [ 9713.753743] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472964136+8) 4294967259
>> [ 9742.956450] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967260
>> [ 9742.956502] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967261
>> [ 9742.956503] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967262
>> [ 9742.956551] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967263
>> [ 9743.152042] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967264
>> [ 9756.828598] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472967144+8) 4294967265
>> [ 9811.500567] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967264
>> [ 9811.510052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967263
>> [ 9811.519534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967262
>> [ 9811.529017] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967261
>> [ 9811.538503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967260
>> [ 9811.547984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472967144+8) 4294967259
>> [ 9816.207521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967266
>> [ 9816.207576] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967265
>> [ 9816.207628] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967264
>> [ 9816.207682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967263
>> [ 9816.207747] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967262
>> [ 9816.207801] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967261
>> [ 9816.207859] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967260
>> [ 9816.207914] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472971752+8) 4294967259
>> [ 9816.211655] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967260
>> [ 9816.211658] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967261
>> [ 9816.211787] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967262
>> [ 9816.211882] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967263
>> [ 9816.211901] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967264
>> [ 9816.211947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472975272+8) 4294967265
>> [ 9919.228920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967264
>> [ 9919.238395] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967263
>> [ 9919.247877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967262
>> [ 9919.257360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967261
>> [ 9919.266845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967260
>> [ 9919.276332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472975272+8) 4294967259
>> [ 9921.581043] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967261
>> [ 9921.580965] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967260
>> [ 9921.581099] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967262
>> [ 9921.581185] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967263
>> [ 9921.581304] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967264
>> [ 9921.581341] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967265
>> [ 9921.581364] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967266
>> [ 9921.581365] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472984680+8) 4294967267
>> [ 9921.583872] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967266
>> [ 9921.583882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967265
>> [ 9921.583897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967264
>> [ 9921.583913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967263
>> [ 9921.583929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967262
>> [ 9921.583945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967261
>> [ 9921.583960] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967260
>> [ 9921.583977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472984680+8) 4294967259
>> [ 9951.448268] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314088+8) 4294967260
>> [ 9986.682431] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967262
>> [ 9986.682540] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967263
>> [ 9986.682661] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967264
>> [ 9986.687019] __add_stripe_bio: md127: start ff2721beec8c2fa0(28472994664+8) 4294967265
>> [10026.057977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967264
>> [10026.057980] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967263
>> [10026.057982] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967262
>> [10026.057984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967261
>> [10026.057986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967260
>> [10026.057987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28472994664+8) 4294967259
>> [10026.060967] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967260
>> [10026.061269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967261
>> [10026.061642] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967262
>> [10026.061728] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967264
>> [10026.061715] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967263
>> [10026.061745] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967265
>> [10026.061790] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967266
>> [10026.061809] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473000424+8) 4294967267
>> [10055.918459] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967266
>> [10056.153717] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967265
>> [10056.388985] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967264
>> [10056.624244] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967263
>> [10056.859507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967262
>> [10057.094768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967261
>> [10057.330043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967260
>> [10057.565339] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473000424+8) 4294967259
>> [10060.361434] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967260
>> [10060.361487] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967261
>> [10060.361634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967262
>> [10060.361709] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967263
>> [10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967264
>> [10060.361710] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967265
>> [10060.361723] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473001448+8) 4294967266
>> [10108.302805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967265
>> [10108.302808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967264
>> [10108.302810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967263
>> [10108.302812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967262
>> [10108.302814] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967261
>> [10108.302816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967260
>> [10108.302818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473001448+8) 4294967259
>> [10108.307331] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967260
>> [10108.307515] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967261
>> [10108.307665] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967263
>> [10108.307647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967262
>> [10108.308273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967264
>> [10108.308454] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967266
>> [10108.308426] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967265
>> [10108.308720] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473008296+8) 4294967267
>> [10138.300402] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967266
>> [10138.516650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967265
>> [10138.732928] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967264
>> [10138.949166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967263
>> [10139.165431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967262
>> [10139.381678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967261
>> [10139.597964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967260
>> [10139.814237] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473008296+8) 4294967259
>> [10144.088915] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746728+8) 4294967260
>> [10188.320954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746728+8) 4294967259
>> [10196.658979] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967260
>> [10196.659207] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967261
>> [10196.659343] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967262
>> [10196.659430] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967263
>> [10196.659439] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967264
>> [10196.660163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967265
>> [10196.660183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473013608+8) 4294967266
>> [10279.470741] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967265
>> [10279.480231] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967264
>> [10279.489723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967263
>> [10279.499214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967262
>> [10279.508709] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967261
>> [10279.518206] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967260
>> [10279.527693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28473013608+8) 4294967259
>> [10286.896922] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032746984+8) 4294967260
>> [10304.464308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032746984+8) 4294967259
>> [10305.667186] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967260
>> [10305.667269] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967261
>> [10305.667270] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967262
>> [10305.667429] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967263
>> [10305.667512] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967264
>> [10305.667590] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967265
>> [10305.667639] __add_stripe_bio: md127: start ff2721beec8c2fa0(28473022056+8) 4294967266
>> [10334.751820] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
>> [10415.904902] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
>> [10415.960311] __add_stripe_bio: md127: start ff2721beec8c2fa0(28457314152+8) 4294967260
>> [10423.280595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28457314152+8) 4294967259
>> [10439.777461] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967263
>> [10439.777592] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967264
>> [10439.777647] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967265
>> [10439.777786] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741281576+8) 4294967266
>> [10497.698903] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967265
>> [10497.698905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967264
>> [10497.698907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967263
>> [10497.698908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967262
>> [10497.698910] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967261
>> [10497.698911] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967260
>> [10497.698912] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741281576+8) 4294967259
>> [10497.701113] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967260
>> [10497.701118] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967261
>> [10497.701183] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967262
>> [10497.701490] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967263
>> [10497.701908] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967264
>> [10497.702132] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741287848+8) 4294967265
>> [10593.723273] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967264
>> [10593.723280] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967265
>> [10593.723381] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741293480+8) 4294967266
>> [10681.179411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967265
>> [10681.188893] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967264
>> [10681.198368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967263
>> [10681.207851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967262
>> [10681.217350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967261
>> [10681.226842] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967260
>> [10681.236340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741293480+8) 4294967259
>> [10689.151359] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967260
>> [10689.151633] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967261
>> [10689.151649] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967262
>> [10689.151700] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967263
>> [10689.151823] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967264
>> [10689.152267] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967265
>> [10689.152370] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741301800+8) 4294967266
>> [10765.465443] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967260
>> [10765.465539] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967261
>> [10765.465942] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967262
>> [10765.466073] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967263
>> [10765.471339] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967264
>> [10765.471352] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967265
>> [10765.471615] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741309544+8) 4294967266
>> [10810.806942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967265
>> [10810.816430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967264
>> [10810.825920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967263
>> [10810.835410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967262
>> [10810.844892] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967261
>> [10810.854379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967260
>> [10810.863866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741309544+8) 4294967259
>> [10829.079850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741313832+8) 4294967260
>> [10849.794350] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
>> [10918.134642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967264
>> [10918.144122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967263
>> [10918.153605] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967262
>> [10918.163087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967261
>> [10918.172569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967260
>> [10918.182053] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741313832+8) 4294967259
>> [10925.672507] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967260
>> [10925.672673] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967262
>> [10925.672533] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967261
>> [10925.672788] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967263
>> [10925.672958] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967264
>> [10925.672978] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967265
>> [10925.673524] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967266
>> [10925.673721] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741320744+8) 4294967267
>> [10938.064956] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967266
>> [10938.263167] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967265
>> [10938.461362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967264
>> [10938.659537] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967263
>> [10938.857718] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967262
>> [10939.055907] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967261
>> [10939.254110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967260
>> [10939.452289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741320744+8) 4294967259
>> [10942.625740] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967260
>> [10942.625791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967262
>> [10942.625850] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967264
>> [10942.625849] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967263
>> [10942.625789] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967261
>> [10942.626403] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741322600+8) 4294967265
>> [11020.726643] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967264
>> [11020.726645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967263
>> [11020.726646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967262
>> [11020.726648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967261
>> [11020.726649] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967260
>> [11020.726651] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741322600+8) 4294967259
>> [11045.762697] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967262
>> [11045.763802] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967263
>> [11045.763966] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967264
>> [11045.764660] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741328104+8) 4294967265
>> [11072.427987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967266
>> [11072.429693] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967265
>> [11072.429971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967264
>> [11072.430020] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967263
>> [11072.430076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967262
>> [11072.430127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967261
>> [11072.430180] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967260
>> [11072.430234] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741342760+8) 4294967259
>> [11103.176662] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967260
>> [11103.176760] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967261
>> [11103.176914] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967262
>> [11103.176947] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967263
>> [11103.177351] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967264
>> [11103.243210] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741345192+8) 4294967265
>> [11197.368568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967264
>> [11197.378067] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967263
>> [11197.387571] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967262
>> [11197.397061] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967261
>> [11197.406551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967260
>> [11197.416027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741345192+8) 4294967259
>> [11213.302960] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741352808+8) 4294967260
>> [11214.005987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967265
>> [11214.005989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967264
>> [11214.005990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967263
>> [11214.005991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967262
>> [11214.005992] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967261
>> [11214.005994] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967260
>> [11214.005995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741352808+8) 4294967259
>> [11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967260
>> [11251.017225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967261
>> [11251.017233] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967263
>> [11251.017238] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967266
>> [11251.017236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967265
>> [11251.017234] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967264
>> [11251.017231] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967262
>> [11269.980634] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741359720+8) 4294967267
>> [11289.588526] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967266
>> [11289.598008] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967265
>> [11289.607492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967264
>> [11289.616977] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967263
>> [11289.626462] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967262
>> [11289.635951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967261
>> [11289.645439] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967260
>> [11289.654929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741359720+8) 4294967259
>> [11289.955406] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967260
>> [11289.955748] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967261
>> [11289.955828] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967262
>> [11289.956014] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967263
>> [11310.687017] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741364520+8) 4294967264
>> [11344.753308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967266
>> [11344.753344] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967265
>> [11344.753381] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967264
>> [11344.753417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967263
>> [11344.753453] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967262
>> [11344.753488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967261
>> [11344.753524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967260
>> [11344.753559] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741371944+8) 4294967259
>> [11372.057310] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967260
>> [11372.057453] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967261
>> [11372.058126] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967262
>> [11372.058225] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967263
>> [11372.058236] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967264
>> [11372.058445] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741374568+8) 4294967265
>> [11477.692568] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967264
>> [11477.702057] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967263
>> [11477.711549] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967262
>> [11477.721041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967261
>> [11477.730527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967260
>> [11477.740015] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741374568+8) 4294967259
>> [11484.738964] __add_stripe_bio: md127: start ff2721beec8c2fa0(28725142504+8) 4294967260
>> [11516.590602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28725142504+8) 4294967259
>> [11541.514580] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967260
>> [11541.514657] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967261
>> [11541.514736] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967262
>> [11541.514795] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967263
>> [11541.514937] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967264
>> [11541.514959] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967265
>> [11541.515020] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967266
>> [11541.515178] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741387368+8) 4294967267
>> [11589.245255] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967266
>> [11589.530527] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967265
>> [11589.815730] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967264
>> [11590.100882] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967263
>> [11590.386071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967262
>> [11590.671228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967261
>> [11590.956368] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967260
>> [11591.241504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741387368+8) 4294967259
>> [11665.205805] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967264
>> [11665.215300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967263
>> [11665.224797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967262
>> [11665.234283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967261
>> [11665.243770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967260
>> [11665.253265] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741392232+8) 4294967259
>> [11676.491376] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967260
>> [11676.491588] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967261
>> [11676.491637] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967262
>> [11676.491798] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967263
>> [11676.492010] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741400360+8) 4294967264
>> [11785.703215] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967260
>> [11787.558791] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967261
>> [11787.558892] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967262
>> [11791.614199] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967263
>> [11793.388452] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967264
>> [11795.421600] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967265
>> [11795.422104] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967266
>> [11795.424300] __add_stripe_bio: md127: start ff2721beec8c2fa0(28741406824+8) 4294967267
>> [11795.426502] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967266
>> [11795.426503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967265
>> [11795.426504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967264
>> [11795.426505] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967263
>> [11795.426506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967262
>> [11795.426507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967261
>> [11795.426509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967260
>> [11795.426510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28741406824+8) 4294967259
>> [11871.476819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967260
>> [11871.486321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009321576+8) 4294967259
>> [11919.615037] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967260
>> [11919.615114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967261
>> [11919.615374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967262
>> [11919.615395] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967263
>> [11919.615491] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967264
>> [11928.774840] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009330472+8) 4294967267
>> [12000.001506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967264
>> [12000.001507] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967263
>> [12000.001509] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967262
>> [12000.001510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967261
>> [12000.001512] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967260
>> [12000.001514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009336936+8) 4294967259
>> [12033.086368] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967260
>> [12033.086440] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967261
>> [12033.086702] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967262
>> [12033.086790] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967263
>> [12033.087023] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009343400+8) 4294967264
>> [12071.801701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967262
>> [12071.801733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967264
>> [12071.801727] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967263
>> [12071.801859] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967265
>> [12071.801934] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009349352+8) 4294967266
>> [12147.838099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967265
>> [12147.838104] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967264
>> [12147.838108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967263
>> [12147.838113] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967262
>> [12147.838117] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967261
>> [12147.838122] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967260
>> [12147.838131] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009349352+8) 4294967259
>> [12161.825218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967260
>> [12171.278213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967261
>> [12171.278308] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967262
>> [12171.278349] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967263
>> [12171.278418] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967264
>> [12171.278481] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009356712+8) 4294967265
>> [12225.694791] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967264
>> [12225.704279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967263
>> [12225.713774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967262
>> [12225.723264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967261
>> [12225.732759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967260
>> [12225.742248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009356712+8) 4294967259
>> [12241.217982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967260
>> [12241.218022] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967261
>> [12241.218156] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967262
>> [12241.218291] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967263
>> [12241.218712] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967264
>> [12241.225590] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009362344+8) 4294967265
>> [12324.241931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967264
>> [12324.251421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967263
>> [12324.260905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967262
>> [12324.270390] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967261
>> [12324.279874] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967260
>> [12324.289362] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009362344+8) 4294967259
>> [12330.627283] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967261
>> [12330.627282] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967260
>> [12330.627356] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967262
>> [12330.627452] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967263
>> [12330.627459] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967264
>> [12330.627476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009368808+8) 4294967265
>> [12360.643250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967265
>> [12360.643251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967264
>> [12360.643253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967263
>> [12360.643254] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967262
>> [12360.643256] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967261
>> [12360.643257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967260
>> [12360.643258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009368808+8) 4294967259
>> [12412.055085] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967262
>> [12412.055185] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967263
>> [12412.055346] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967264
>> [12412.055358] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009379560+8) 4294967265
>> [12502.916463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967264
>> [12502.925955] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967263
>> [12502.935437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967262
>> [12502.944921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967261
>> [12502.954403] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967260
>> [12502.963891] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009379560+8) 4294967259
>> [12508.172163] __add_stripe_bio: md127: start ff2721beec8c2fa0(28994103208+8) 4294967260
>> [12569.394168] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(28994103208+8) 4294967259
>> [12669.806358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967264
>> [12669.815852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967263
>> [12669.825349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967262
>> [12669.834848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967261
>> [12669.844337] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967260
>> [12669.853824] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009393000+8) 4294967259
>> [12681.132403] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967260
>> [12681.132623] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967261
>> [12681.132903] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967262
>> [12681.133118] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967263
>> [12681.133360] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967264
>> [12681.133472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967265
>> [12681.133687] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009401128+8) 4294967266
>> [12756.131024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967265
>> [12756.140520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967264
>> [12756.150010] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967263
>> [12756.159496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967262
>> [12756.168984] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967261
>> [12756.178470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967260
>> [12756.187958] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009401128+8) 4294967259
>> [12761.752380] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967260
>> [12761.752555] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967261
>> [12761.752566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967262
>> [12761.752717] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967263
>> [12761.752864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967264
>> [12761.753575] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009406632+8) 4294967265
>> [12841.108192] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967264
>> [12841.117686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967263
>> [12841.127177] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967262
>> [12841.136667] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967261
>> [12841.146150] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967260
>> [12841.155642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009406632+8) 4294967259
>> [12854.788520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967262
>> [12854.789006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967263
>> [12854.790480] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967264
>> [12854.792345] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967265
>> [12854.792371] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967266
>> [12854.792648] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009414440+8) 4294967267
>> [12854.796137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967266
>> [12854.796140] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967265
>> [12854.796143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967264
>> [12854.796145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967263
>> [12854.796147] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967262
>> [12854.796149] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967261
>> [12854.796151] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967260
>> [12854.796152] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009414440+8) 4294967259
>> [12979.496382] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967265
>> [12979.505867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967264
>> [12979.515357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967263
>> [12979.524845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967262
>> [12979.534338] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967261
>> [12979.543825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967260
>> [12979.553315] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009416680+8) 4294967259
>> [12987.839356] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032747304+8) 4294967260
>> [13019.716541] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032747304+8) 4294967259
>> [13023.790667] __add_stripe_bio: md127: start ff2721beec8c2fa0(8053065000+8) 4294967260
>> [13166.159630] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(8053065000+8) 4294967259
>> [13172.153701] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967260
>> [13172.153856] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967261
>> [13172.154183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967262
>> [13172.154307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967263
>> [13172.154320] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967264
>> [13172.154325] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967265
>> [13172.154327] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967266
>> [13172.154540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009436840+8) 4294967267
>> [13184.154929] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967266
>> [13184.395309] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967265
>> [13184.635656] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967264
>> [13184.876002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967263
>> [13185.116361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967262
>> [13185.356722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967261
>> [13185.597099] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967260
>> [13185.837445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009436840+8) 4294967259
>> [13200.462739] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967260
>> [13200.463373] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967261
>> [13200.463433] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967262
>> [13200.463686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967263
>> [13200.463718] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967264
>> [13200.463750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967265
>> [13200.463801] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967266
>> [13200.463824] __add_stripe_bio: md127: start ff2721beec8c2fa0(29009439016+8) 4294967267
>> [13218.832166] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967266
>> [13218.964853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967265
>> [13219.097514] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967264
>> [13219.230197] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967263
>> [13219.362875] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967262
>> [13219.495536] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967261
>> [13219.628221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967260
>> [13219.760852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009439016+8) 4294967259
>> [13284.342646] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967264
>> [13284.352133] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967263
>> [13284.361618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967262
>> [13284.371115] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967261
>> [13284.380606] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967260
>> [13284.390092] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29009440424+8) 4294967259
>> [13290.205839] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312350184+8) 4294967260
>> [13328.994695] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312350184+8) 4294967259
>> [13358.709396] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967261
>> [13358.709379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967260
>> [13358.709414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967262
>> [13358.709435] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967263
>> [13358.709475] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967264
>> [13362.737444] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967265
>> [13362.737666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324623208+8) 4294967266
>> [13386.563843] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967260
>> [13386.563968] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967261
>> [13386.564092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967262
>> [13386.564227] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967263
>> [13386.564297] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967264
>> [13386.564364] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967265
>> [13386.564532] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324629288+8) 4294967266
>> [13425.701678] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967265
>> [13425.701680] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967264
>> [13425.701681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967263
>> [13425.701682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967262
>> [13425.701684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967261
>> [13425.701686] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967260
>> [13425.701688] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324629288+8) 4294967259
>> [13491.009481] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967264
>> [13491.018974] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967263
>> [13491.028463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967262
>> [13491.037945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967261
>> [13491.047430] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967260
>> [13491.056918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545812584+8) 4294967259
>> [13493.360976] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967260
>> [13493.361476] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967261
>> [13493.361592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967262
>> [13493.366880] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967263
>> [13493.367183] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967264
>> [13493.367399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967265
>> [13493.367635] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967266
>> [13493.367706] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545815400+8) 4294967267
>> [13524.736988] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967266
>> [13524.746476] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967265
>> [13524.755962] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967264
>> [13524.765450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967263
>> [13524.774943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967262
>> [13524.784436] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967261
>> [13524.793931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967260
>> [13524.803422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545815400+8) 4294967259
>> [13529.444566] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967260
>> [13529.444628] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967261
>> [13529.445249] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967262
>> [13529.445330] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967264
>> [13529.445307] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967263
>> [13529.445578] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967265
>> [13529.445594] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545816296+8) 4294967266
>> [13568.836756] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967265
>> [13568.836757] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967264
>> [13568.836759] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967263
>> [13568.836766] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967262
>> [13568.836768] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967261
>> [13568.836769] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967260
>> [13568.836770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545816296+8) 4294967259
>> [13595.486041] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967260
>> [13595.486114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967261
>> [13595.486450] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967262
>> [13595.486544] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967263
>> [13595.486756] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967264
>> [13595.486807] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967265
>> [13595.487127] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545821480+8) 4294967266
>> [13684.444417] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967265
>> [13684.453904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967264
>> [13684.463391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967263
>> [13684.472878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967262
>> [13684.482361] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967261
>> [13684.491850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967260
>> [13684.501340] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545821480+8) 4294967259
>> [13686.643818] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967260
>> [13686.643853] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967261
>> [13686.643948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967262
>> [13686.643993] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967263
>> [13686.644144] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967264
>> [13686.644406] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967265
>> [13686.644525] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324647016+8) 4294967266
>> [13734.793297] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967265
>> [13734.809137] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967264
>> [13744.445102] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967263
>> [13744.454593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967262
>> [13744.464083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967261
>> [13744.473560] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967260
>> [13744.483051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324647016+8) 4294967259
>> [13820.332081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967264
>> [13820.341572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967263
>> [13820.351058] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967262
>> [13820.360550] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967261
>> [13820.370039] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967260
>> [13820.379525] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324654440+8) 4294967259
>> [13828.887574] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967260
>> [13828.888698] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967261
>> [13828.888811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967262
>> [13828.888838] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967263
>> [13828.888877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967264
>> [13828.889012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967265
>> [13828.889087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324662056+8) 4294967266
>> [13939.263249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967265
>> [13939.272744] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967264
>> [13939.282233] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967263
>> [13939.291721] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967262
>> [13939.301203] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967261
>> [13939.310687] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967260
>> [13939.320173] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324662056+8) 4294967259
>> [13949.622362] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032748904+8) 4294967260
>> [13975.927477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324672360+8) 4294967267
>> [13975.932834] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967266
>> [13975.932837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967265
>> [13975.932840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967264
>> [13975.932843] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967263
>> [13975.932845] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967262
>> [13975.932848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967261
>> [13975.932851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967260
>> [13975.932853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324672360+8) 4294967259
>> [14002.581980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967260
>> [14002.582096] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967261
>> [14002.582385] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967262
>> [14002.582558] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967263
>> [14002.582619] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967264
>> [14002.582667] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324674920+8) 4294967265
>> [14119.130023] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967264
>> [14119.139513] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967263
>> [14119.148996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967262
>> [14119.158486] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967261
>> [14119.167972] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967260
>> [14119.177456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324674920+8) 4294967259
>> [14124.072930] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967260
>> [14124.073027] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967262
>> [14124.073025] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967261
>> [14124.073126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967263
>> [14124.073129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967264
>> [14124.073379] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324682216+8) 4294967265
>> [14210.467642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967264
>> [14210.467644] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967263
>> [14210.467645] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967262
>> [14210.467647] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967261
>> [14210.467650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967260
>> [14210.467652] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324682216+8) 4294967259
>> [14239.699953] __add_stripe_bio: md127: start ff2721beec8c2fa0(29312349928+8) 4294967260
>> [14343.004904] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29312349928+8) 4294967259
>> [14351.607882] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967260
>> [14351.607891] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967262
>> [14351.607979] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967263
>> [14351.607884] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967261
>> [14351.608559] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967264
>> [14351.608811] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967265
>> [14351.691948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324697960+8) 4294967266
>> [14460.596492] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967265
>> [14460.596493] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967264
>> [14460.596494] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967263
>> [14460.596495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967262
>> [14460.596496] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967261
>> [14460.596497] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967260
>> [14460.596499] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324697960+8) 4294967259
>> [14460.598694] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967260
>> [14460.598948] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967261
>> [14460.599105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967262
>> [14460.599222] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967263
>> [14460.599261] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967264
>> [14460.599376] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324706536+8) 4294967265
>> [14541.632593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967264
>> [14541.642093] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967263
>> [14541.651581] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967262
>> [14541.661069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967261
>> [14541.670554] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967260
>> [14541.680042] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324706536+8) 4294967259
>> [14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967260
>> [14547.546543] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967261
>> [14547.547058] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967262
>> [14547.547167] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967263
>> [14547.547468] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967264
>> [14547.547552] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967265
>> [14547.547661] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967266
>> [14547.547697] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324713128+8) 4294967267
>> [14575.315268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967266
>> [14575.596007] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967265
>> [14575.876716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967264
>> [14576.157450] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967263
>> [14576.438196] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967262
>> [14576.718943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967261
>> [14576.999682] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967260
>> [14577.280400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324713128+8) 4294967259
>> [14582.737304] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749480+8) 4294967260
>> [14636.539506] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749480+8) 4294967259
>> [14638.880107] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032749864+8) 4294967260
>> [14657.493830] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032749864+8) 4294967259
>> [14675.212921] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967260
>> [14675.213033] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967261
>> [14675.213091] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967262
>> [14675.213105] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967263
>> [14675.213429] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967264
>> [14675.213477] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967265
>> [14675.213877] __add_stripe_bio: md127: start ff2721beec8c2fa0(29324721960+8) 4294967266
>> [14737.052888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967265
>> [14737.062372] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967264
>> [14737.071858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967263
>> [14737.081341] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967262
>> [14737.090827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967261
>> [14737.100311] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967260
>> [14737.109810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29324721960+8) 4294967259
>> [14743.382147] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967260
>> [14743.382495] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967261
>> [14743.382520] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967262
>> [14743.382595] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967263
>> [14743.382607] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967264
>> [14743.382646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967265
>> [14743.382683] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545824552+8) 4294967266
>> [14836.115483] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967265
>> [14836.124971] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967264
>> [14836.134457] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967263
>> [14836.143943] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967262
>> [14836.153427] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967261
>> [14836.162908] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967260
>> [14836.172392] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545824552+8) 4294967259
>> [14867.977410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967265
>> [14867.977411] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967264
>> [14867.977413] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967263
>> [14867.977414] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967262
>> [14867.977416] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967261
>> [14867.977418] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967260
>> [14867.977419] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545830952+8) 4294967259
>> [14867.982046] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967260
>> [14867.982289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967261
>> [14867.982319] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967262
>> [14867.982377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967263
>> [14867.982398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967264
>> [14867.982409] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545836776+8) 4294967265
>> [14906.532978] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967264
>> [14906.532983] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967263
>> [14906.532987] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967262
>> [14906.532991] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967261
>> [14906.532995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967260
>> [14906.533001] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545845096+8) 4294967259
>> [14906.537331] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967260
>> [14906.537374] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967261
>> [14906.537389] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967262
>> [14906.537397] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967263
>> [14906.537413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967264
>> [14906.537417] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967265
>> [14906.538256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545849064+8) 4294967266
>> [15000.553174] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967265
>> [15000.562663] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967264
>> [15000.572145] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967263
>> [15000.581619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967262
>> [15000.591109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967261
>> [15000.600591] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967260
>> [15000.610079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545849064+8) 4294967259
>> [15038.840534] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967265
>> [15038.840569] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967266
>> [15038.918437] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545864360+8) 4294967267
>> [15072.110477] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967266
>> [15072.119963] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967265
>> [15072.129445] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967264
>> [15072.138931] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967263
>> [15072.148422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967262
>> [15072.157919] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967261
>> [15072.167410] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967260
>> [15072.176895] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545864360+8) 4294967259
>> [15094.337077] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967260
>> [15094.337106] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967262
>> [15094.337126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967264
>> [15094.337145] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967265
>> [15094.337125] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967263
>> [15094.337095] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967261
>> [15094.337202] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967266
>> [15094.337864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545868328+8) 4294967267
>> [15109.185642] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967266
>> [15109.231325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967265
>> [15109.276996] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967264
>> [15109.322684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967263
>> [15109.368357] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967262
>> [15109.414059] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967261
>> [15109.459734] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967260
>> [15109.505426] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545868328+8) 4294967259
>> [15109.940853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967266
>> [15109.940854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967265
>> [15109.940856] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967264
>> [15109.940857] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967263
>> [15109.940858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967262
>> [15109.940860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967261
>> [15109.940862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967260
>> [15109.940863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545870888+8) 4294967259
>> [15109.943869] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032750824+8) 4294967260
>> [15109.946364] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032750824+8) 4294967259
>> [15143.938195] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967260
>> [15143.938199] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967261
>> [15143.938502] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967262
>> [15143.938860] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967263
>> [15143.939079] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967265
>> [15143.939055] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545873384+8) 4294967264
>> [15194.321503] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967264
>> [15194.330986] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967263
>> [15194.340472] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967262
>> [15194.349954] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967261
>> [15194.359431] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967260
>> [15194.368920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545873384+8) 4294967259
>> [15202.981864] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967260
>> [15202.982006] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967262
>> [15202.982049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967263
>> [15202.981938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967261
>> [15202.982750] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545880552+8) 4294967264
>> [15284.694932] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967263
>> [15284.704421] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967262
>> [15284.713913] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967261
>> [15284.723400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967260
>> [15284.732888] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545880552+8) 4294967259
>> [15293.005593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967264
>> [15293.005596] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967263
>> [15293.005597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967262
>> [15293.005599] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967261
>> [15293.005602] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967260
>> [15293.005603] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29531458344+8) 4294967259
>> [15293.008211] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032751272+8) 4294967260
>> [15293.009308] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032751272+8) 4294967259
>> [15361.123918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967265
>> [15361.133405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967264
>> [15361.142894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967263
>> [15361.152384] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967262
>> [15361.161896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967261
>> [15361.171380] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967260
>> [15361.180864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545683432+8) 4294967259
>> [15364.021538] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967260
>> [15364.021666] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967261
>> [15364.022045] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967262
>> [15364.022114] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967264
>> [15364.022092] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967263
>> [15364.022137] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967265
>> [15364.022367] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967266
>> [15364.022522] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545884008+8) 4294967267
>> [15374.355722] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967266
>> [15374.554797] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967265
>> [15374.753840] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967264
>> [15374.952890] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967263
>> [15375.151938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967262
>> [15375.351041] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967261
>> [15375.550089] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967260
>> [15375.749178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545884008+8) 4294967259
>> [15475.090773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967263
>> [15475.090778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967262
>> [15475.090782] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967261
>> [15475.090790] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967260
>> [15475.090795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545887592+8) 4294967259
>> [15512.462087] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530332008+8) 4294967260
>> [15605.182723] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29530332008+8) 4294967259
>> [15610.777819] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967260
>> [15610.778112] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967261
>> [15610.778158] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967262
>> [15610.778540] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967263
>> [15610.778741] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967264
>> [15610.778768] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967265
>> [15610.779126] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967266
>> [15610.779136] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545906216+8) 4294967267
>> [15625.092720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967266
>> [15625.291753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967265
>> [15625.490794] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967264
>> [15625.689869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967263
>> [15625.888922] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967262
>> [15626.087961] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967261
>> [15626.287002] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967260
>> [15626.486060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545906216+8) 4294967259
>> [15631.421893] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967260
>> [15631.422458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967261
>> [15631.422563] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967262
>> [15631.422804] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967263
>> [15631.422822] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967264
>> [15631.422885] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967265
>> [15649.575279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545907560+8) 4294967266
>> [15684.592107] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967265
>> [15684.601597] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967264
>> [15684.611088] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967263
>> [15684.620575] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967262
>> [15684.630060] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967261
>> [15684.639545] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967260
>> [15684.649027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545907560+8) 4294967259
>> [15690.850975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967260
>> [15690.851550] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967261
>> [15690.851939] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967262
>> [15690.852053] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967263
>> [15690.852178] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967264
>> [15690.852281] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967265
>> [15690.852313] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545911720+8) 4294967266
>> [15735.620596] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967260
>> [15735.620646] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967261
>> [15735.620685] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967263
>> [15735.620686] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967264
>> [15735.620652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967262
>> [15735.621500] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545921320+8) 4294967265
>> [15816.149770] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967264
>> [15816.149772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967263
>> [15816.149774] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967262
>> [15816.149776] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967261
>> [15816.149778] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967260
>> [15816.149780] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29545921320+8) 4294967259
>> [15844.779214] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967260
>> [15844.779218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29545928872+8) 4294967261
>> [15934.023215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967265
>> [15934.032699] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967264
>> [15934.042184] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967263
>> [15934.051668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967262
>> [15934.061153] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967261
>> [15934.070639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967260
>> [15934.080128] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816766056+8) 4294967259
>> [15935.505198] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967260
>> [15935.505344] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967261
>> [15935.505684] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967262
>> [15951.816975] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967263
>> [15951.817541] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967264
>> [15951.817733] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816773224+8) 4294967265
>> [16048.370516] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967264
>> [16048.370517] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967263
>> [16048.370518] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967262
>> [16048.370520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967261
>> [16048.370521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967260
>> [16048.370522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816773224+8) 4294967259
>> [16048.372719] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967260
>> [16048.373083] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967261
>> [16048.373765] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967262
>> [16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967263
>> [16048.373913] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967264
>> [16048.373936] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967265
>> [16048.373938] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967266
>> [16048.373966] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816781032+8) 4294967267
>> [16099.152109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967266
>> [16099.438210] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967265
>> [16099.724249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967264
>> [16100.011268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967263
>> [16100.298239] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967262
>> [16100.585188] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967261
>> [16100.872155] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967260
>> [16101.159163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816781032+8) 4294967259
>> [16105.965912] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967260
>> [16105.966207] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967261
>> [16105.966279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967262
>> [16105.966399] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967263
>> [16105.966643] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967264
>> [16105.966725] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816783592+8) 4294967265
>> [16163.037754] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967260
>> [16163.037836] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967261
>> [16163.037883] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967262
>> [16163.038018] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967263
>> [16163.038537] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967264
>> [16163.039472] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967265
>> [16163.263169] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816815272+8) 4294967266
>> [16264.133495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967265
>> [16264.142981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967264
>> [16264.152464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967263
>> [16264.161951] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967262
>> [16264.171441] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967261
>> [16264.180935] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967260
>> [16264.190424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816815272+8) 4294967259
>> [16266.992432] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967260
>> [16266.992711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967261
>> [16266.993213] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967262
>> [16266.993398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967263
>> [16266.993458] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967264
>> [16290.695547] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967265
>> [16290.695556] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816820136+8) 4294967266
>> [16379.808266] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967265
>> [16379.817751] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967264
>> [16379.827236] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967263
>> [16379.836720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967262
>> [16379.846205] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967261
>> [16379.855697] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967260
>> [16379.865187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816820136+8) 4294967259
>> [16387.725516] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967260
>> [16387.725798] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967261
>> [16387.725917] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967262
>> [16387.726352] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967263
>> [16387.726414] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967264
>> [16387.726707] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967265
>> [16408.022981] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816798760+8) 4294967266
>> [16505.753964] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967265
>> [16505.763456] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967264
>> [16505.772952] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967263
>> [16505.782444] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967262
>> [16505.791921] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967261
>> [16505.801405] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967260
>> [16505.810889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816798760+8) 4294967259
>> [16515.475620] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967261
>> [16515.475613] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967260
>> [16515.475711] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967262
>> [16515.475844] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967263
>> [16515.475958] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967264
>> [16515.476377] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967265
>> [16515.476744] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967266
>> [16534.160611] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816802408+8) 4294967267
>> [16554.448056] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967266
>> [16554.457546] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967265
>> [16554.467022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967264
>> [16554.476504] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967263
>> [16554.485979] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967262
>> [16554.495463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967261
>> [16554.504953] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967260
>> [16554.514442] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816802408+8) 4294967259
>> [16555.835592] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967260
>> [16555.835847] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967261
>> [16555.836140] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967262
>> [16555.836279] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967263
>> [16576.074289] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967264
>> [16576.074652] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967265
>> [16576.075049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816805160+8) 4294967266
>> [16702.114836] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967265
>> [16702.124323] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967264
>> [16702.133808] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967263
>> [16702.143300] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967262
>> [16702.152799] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967261
>> [16702.162289] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967260
>> [16702.171773] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816805160+8) 4294967259
>> [16710.388044] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967260
>> [16710.388157] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967261
>> [16710.388256] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967262
>> [16710.388347] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967264
>> [16710.388390] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967265
>> [16710.388410] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967266
>> [16710.388285] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967263
>> [16710.389466] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816808616+8) 4294967267
>> [16726.681690] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967266
>> [16732.076989] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967265
>> [16732.227720] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967264
>> [16732.378463] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967263
>> [16732.529207] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967262
>> [16732.679930] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967261
>> [16732.830681] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967260
>> [16732.981424] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816808616+8) 4294967259
>> [16739.691982] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967261
>> [16739.691980] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967260
>> [16739.692049] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967262
>> [16739.692753] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967263
>> [16739.693143] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967264
>> [16739.693286] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967265
>> [16739.693391] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816835496+8) 4294967266
>> [16796.194339] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967260
>> [16796.194398] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967261
>> [16796.194422] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967262
>> [16796.194483] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967263
>> [16796.194946] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967264
>> [16796.195038] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967265
>> [16796.195499] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816839720+8) 4294967266
>> [16870.648462] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753896+8) 4294967260
>> [16898.893981] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753896+8) 4294967259
>> [16967.311945] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967265
>> [16967.321437] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967264
>> [16967.330924] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967263
>> [16967.340412] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967262
>> [16967.349896] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967261
>> [16967.359379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967260
>> [16967.368867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816848744+8) 4294967259
>> [16976.071394] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967260
>> [16976.071413] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967261
>> [16976.071469] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967262
>> [16976.071568] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967263
>> [16976.071677] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967264
>> [16976.071732] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967266
>> [16976.071700] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967265
>> [16976.072068] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816856744+8) 4294967267
>> [16990.311360] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967266
>> [16990.481108] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967265
>> [16990.650832] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967264
>> [16990.820572] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967263
>> [16990.990305] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967262
>> [16991.160043] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967261
>> [16991.329786] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967260
>> [16991.499551] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816856744+8) 4294967259
>> [16996.987839] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754024+8) 4294967260
>> [17047.151615] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754024+8) 4294967259
>> [17115.879905] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967264
>> [17115.889391] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967263
>> [17115.898868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967262
>> [17115.908352] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967261
>> [17115.917837] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967260
>> [17115.927320] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816864616+8) 4294967259
>> [17116.400129] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967260
>> [17116.400218] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967261
>> [17116.400665] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967262
>> [17116.400972] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967263
>> [17116.401012] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967264
>> [17116.401073] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967265
>> [17116.401094] __add_stripe_bio: md127: start ff2721beec8c2fa0(29816873704+8) 4294967266
>> [17173.631876] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967265
>> [17173.631877] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967264
>> [17173.631878] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967263
>> [17173.631880] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967262
>> [17173.631881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967261
>> [17173.631883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967260
>> [17173.631885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(29816873704+8) 4294967259
>> [17201.424309] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032753960+8) 4294967260
>> [17209.574673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032753960+8) 4294967259
>> [17212.610080] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967260
>> [17212.610550] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967261
>> [17226.574613] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967262
>> [17226.574818] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967263
>> [17226.575156] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967264
>> [17226.575266] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967265
>> [17226.575729] __add_stripe_bio: md127: start ff2721beec8c2fa0(3568296+8) 4294967266
>> [17296.342998] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967265
>> [17296.352083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967264
>> [17296.361178] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967263
>> [17296.370271] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967262
>> [17296.379366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967261
>> [17296.388464] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967260
>> [17296.397557] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3568296+8) 4294967259
>> [17297.711781] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967260
>> [17297.712071] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967261
>> [17311.049521] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967262
>> [17311.049595] __add_stripe_bio: md127: start ff2721beec8c2fa0(3559976+8) 4294967263
>> [17391.178025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967262
>> [17391.187127] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967261
>> [17391.196230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967260
>> [17391.205325] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(3559976+8) 4294967259
>> [17397.357339] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967260
>> [17397.358064] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967261
>> [17397.358191] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967262
>> [17406.112100] __add_stripe_bio: md127: start ff2721beec8c2fa0(77864+8) 4294967263
>> [17460.063716] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967262
>> [17460.072623] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967261
>> [17460.081520] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967260
>> [17460.090422] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(77864+8) 4294967259
>> [17464.959257] __add_stripe_bio: md127: start ff2721beec8c2fa0(75624+8) 4294967260
>> [17482.594510] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(75624+8) 4294967259
>> [17508.091641] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967261
>> [17508.091640] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967260
>> [17508.091647] __add_stripe_bio: md127: start ff2721beec8c2fa0(69480+8) 4294967262
>> [17532.456256] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967260
>> [17532.456294] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967261
>> [17532.456309] __add_stripe_bio: md127: start ff2721beec8c2fa0(68008+8) 4294967262
>> [17572.776358] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967261
>> [17572.785257] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967260
>> [17572.794163] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(68008+8) 4294967259
>> [17594.427109] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(12348140520+8) 4294967259
>> [17631.571482] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967262
>> [17633.896087] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967261
>> [17633.904990] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967260
>> [17633.913889] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(59688+8) 4294967259
>> [17640.670153] __add_stripe_bio: md127: start ff2721beec8c2fa0(42344+8) 4294967262
>> [17661.740739] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967264
>> [17661.740869] __add_stripe_bio: md127: start ff2721beec8c2fa0(48232+8) 4294967265
>> [17691.866848] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967264
>> [17691.866850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967263
>> [17691.866851] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967262
>> [17691.866853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967261
>> [17691.866854] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967260
>> [17691.866855] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(48232+8) 4294967259
>> [17711.055783] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967260
>> [17711.055850] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967262
>> [17711.055807] __add_stripe_bio: md127: start ff2721beec8c2fa0(50024+8) 4294967261
>> [17753.659012] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967261
>> [17753.667920] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967260
>> [17753.676822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(50024+8) 4294967259
>> [17756.839874] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032754472+8) 4294967260
>> [17761.904589] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032754472+8) 4294967259
>> [17764.608952] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967260
>> [17764.609156] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967262
>> [17764.609117] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967261
>> [17764.609992] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967263
>> [17785.372101] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967264
>> [17785.480370] __add_stripe_bio: md127: start ff2721beec8c2fa0(39848+8) 4294967265
>> [17831.956995] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967264
>> [17831.965897] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967263
>> [17831.974795] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967262
>> [17831.983692] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967261
>> [17831.992592] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967260
>> [17832.001495] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(39848+8) 4294967259
>> [17834.344591] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755304+8) 4294967260
>> [17843.122828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(15032755304+8) 4294967259
>> [17845.582553] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967260
>> [17845.582666] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967261
>> [17845.583154] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967262
>> [17845.583190] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967264
>> [17845.583179] __add_stripe_bio: md127: start ff2721beec8c2fa0(49576+8) 4294967263
>> [17895.265867] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967263
>> [17895.274772] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967262
>> [17895.283673] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967261
>> [17895.292578] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967260
>> [17895.301470] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(49576+8) 4294967259
>> [17898.047030] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967260
>> [17898.048282] __add_stripe_bio: md127: start ff2721beec8c2fa0(32744+8) 4294967261
>> [17898.049252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967260
>> [17898.049253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(32744+8) 4294967259
>> [17910.857571] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967260
>> [17910.857605] __add_stripe_bio: md127: start ff2721beec8c2fa0(26536+8) 4294967261
>> [17940.805214] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967260
>> [17940.805216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26536+8) 4294967259
>> [17953.692889] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967261
>> [17953.692929] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
>> [17953.693143] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
>> [17953.694264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21992+8) 4294967262
>> [18003.530258] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967261
>> [18003.539162] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967260
>> [18003.548066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21992+8) 4294967259
>> [18009.481434] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26216+8) 4294967259
>> [18009.492293] __add_stripe_bio: md127: start ff2721beec8c2fa0(25704+8) 4294967260
>> [18048.069279] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25704+8) 4294967259
>> [18048.552558] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967260
>> [18048.552796] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967261
>> [18048.552825] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967262
>> [18048.554933] __add_stripe_bio: md127: start ff2721beec8c2fa0(14056+8) 4294967263
>> [18081.808216] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967262
>> [18081.808217] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967261
>> [18081.808219] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967260
>> [18081.808220] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(14056+8) 4294967259
>> [18081.819956] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967260
>> [18081.820706] __add_stripe_bio: md127: start ff2721beec8c2fa0(19816+8) 4294967261
>> [18110.361331] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967260
>> [18110.361332] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(19816+8) 4294967259
>> [18160.565496] __add_stripe_bio: md127: start ff2721beec8c2fa0(10792+8) 4294967263
>> [18169.306917] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967260
>> [18169.306918] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(10792+8) 4294967259
>> [18169.318095] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967260
>> [18169.319212] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967261
>> [18169.394456] __add_stripe_bio: md127: start ff2721beec8c2fa0(20764456+8) 4294967262
>> [18211.597621] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(61800+8) 4294967259
>> [18261.334926] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967260
>> [18261.335380] __add_stripe_bio: md127: start ff2721beec8c2fa0(8296+8) 4294967261
>> [18297.192489] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25501361192+8) 4294967259
>> [18332.815982] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967262
>> [18332.816950] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967263
>> [18332.819467] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967264
>> [18332.819799] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967265
>> [18332.820819] __add_stripe_bio: md127: start ff2721beec8c2fa0(21156200+8) 4294967266
>> [18363.308810] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967265
>> [18363.308813] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967264
>> [18363.308816] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967263
>> [18363.308819] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967262
>> [18363.308822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967261
>> [18363.308825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967260
>> [18363.308828] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(21156200+8) 4294967259
>> [18412.849619] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967262
>> [18412.850582] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967263
>> [18412.850911] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967264
>> [18412.851264] __add_stripe_bio: md127: start ff2721beec8c2fa0(21171752+8) 4294967265
>> [18443.565725] __add_stripe_bio: md127: start ff2721beec8c2fa0(28454161000+8) 4294967260
>> [18473.028395] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967260
>> [18473.029608] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967261
>> [18502.557646] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967262
>> [18502.557723] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967263
>> [18502.558152] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967264
>> [18502.558930] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967265
>> [18502.559041] __add_stripe_bio: md127: start ff2721beec8c2fa0(331145896+8) 4294967266
>> [18502.563022] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967265
>> [18502.563024] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967264
>> [18502.563025] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967263
>> [18502.563026] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967262
>> [18502.563027] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967261
>> [18502.563029] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967260
>> [18502.563030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331145896+8) 4294967259
>> [18560.133303] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331007720+8) 4294967259
>> [18589.564077] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967260
>> [18589.564089] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967261
>> [18589.564670] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967262
>> [18589.565137] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967263
>> [18589.565700] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967264
>> [18589.566003] __add_stripe_bio: md127: start ff2721beec8c2fa0(331158184+8) 4294967265
>> [18638.817896] __add_stripe_bio: md127: start ff2721beec8c2fa0(331165224+8) 4294967260
>> [18639.851587] __add_stripe_bio: md127: start ff2721beec8c2fa0(4563405800+8) 4294967260
>> [18721.230354] __add_stripe_bio: md127: start ff2721beec8c2fa0(6174129128+8) 4294967260
>> [18753.264400] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(6174129128+8) 4294967259
>> [18814.918267] __add_stripe_bio: md127: start ff2721beec8c2fa0(15032755240+8) 4294967260
>> [18817.035728] __add_stripe_bio: md127: start ff2721beec8c2fa0(331192424+8) 4294967267
>> [18817.037803] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967266
>> [18817.037809] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967265
>> [18817.037812] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967264
>> [18817.037815] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967263
>> [18817.037818] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967262
>> [18817.037822] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967261
>> [18817.037825] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967260
>> [18817.037827] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331192424+8) 4294967259
>> [18847.022837] __add_stripe_bio: md127: start ff2721beec8c2fa0(8589935656+8) 4294967260
>> [18931.949431] __add_stripe_bio: md127: start ff2721beec8c2fa0(29530321832+8) 4294967260
>> [19054.852844] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967265
>> [19054.852846] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967264
>> [19054.852847] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967263
>> [19054.852849] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967262
>> [19054.852850] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967261
>> [19054.852852] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967260
>> [19054.852853] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331221672+8) 4294967259
>> [19104.480492] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967264
>> [19104.480523] __add_stripe_bio: md127: start ff2721beec8c2fa0(331234728+8) 4294967265
>> [19162.254665] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967263
>> [19162.254666] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967262
>> [19162.254668] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967261
>> [19162.254669] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967260
>> [19162.254671] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331242344+8) 4294967259
>> [19194.644616] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967260
>> [19194.644618] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331249768+8) 4294967259
>> [19225.730035] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967260
>> [19225.730135] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967261
>> [19225.730341] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967262
>> [19225.733024] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967263
>> [19225.733509] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967264
>> [19225.799551] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967265
>> [19250.693927] __add_stripe_bio: md127: start ff2721beec8c2fa0(331254184+8) 4294967266
>> [19251.803761] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967260
>> [19251.805818] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967261
>> [19251.807214] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967262
>> [19251.807230] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967263
>> [19251.807441] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967264
>> [19251.807684] __add_stripe_bio: md127: start ff2721beec8c2fa0(331259048+8) 4294967265
>> [19284.419215] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967265
>> [19284.419218] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967264
>> [19284.419221] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967263
>> [19284.419222] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967262
>> [19284.419225] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967261
>> [19284.419228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967260
>> [19284.419230] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(331259048+8) 4294967259
>> [19324.123801] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967260
>> [19324.124880] __add_stripe_bio: md127: start ff2721beec8c2fa0(540515944+8) 4294967261
>> [19389.626363] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967264
>> [19389.626366] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967263
>> [19389.626370] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967262
>> [19389.626373] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967261
>> [19389.626376] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967260
>> [19389.626379] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(540515944+8) 4294967259
>> [19411.000068] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967264
>> [19411.000070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967263
>> [19411.000071] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967262
>> [19411.000073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967261
>> [19411.000075] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967260
>> [19411.000076] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(537018984+8) 4294967259
>> [19442.885291] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967260
>> [19442.885494] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967261
>> [19442.885496] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967262
>> [19442.885575] __add_stripe_bio: md127: start ff2721beec8c2fa0(536946088+8) 4294967263
>> [19500.040964] __add_stripe_bio: md127: start ff2721beec8c2fa0(536935976+8) 4294967260
>> [19503.516938] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967263
>> [19503.516939] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967262
>> [19503.516941] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967261
>> [19503.516942] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967260
>> [19503.516944] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536935976+8) 4294967259
>> [19531.506729] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967261
>> [19531.507247] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967262
>> [19531.510481] __add_stripe_bio: md127: start ff2721beec8c2fa0(536929960+8) 4294967263
>> [19559.370264] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967262
>> [19559.370268] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967261
>> [19559.370272] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967260
>> [19559.370275] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536929960+8) 4294967259
>> [19590.464792] __add_stripe_bio: md127: start ff2721beec8c2fa0(7788215976+8) 4294967260
>> [19620.633883] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(7788215976+8) 4294967259
>> [19650.250748] __add_stripe_bio: md127: start ff2721beec8c2fa0(536913192+8) 4294967260
>> [19680.643891] __add_stripe_bio: md127: start ff2721beec8c2fa0(20135746152+8) 4294967260
>> [19708.804030] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(20135746152+8) 4294967259
>> [19737.574540] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967260
>> [19737.574543] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536913640+8) 4294967259
>> [19765.378569] __add_stripe_bio: md127: start ff2721beec8c2fa0(536900904+8) 4294967261
>> [19794.831033] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967260
>> [19821.381894] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536910312+8) 4294967259
>> [19821.429688] __add_stripe_bio: md127: start ff2721beec8c2fa0(536898024+8) 4294967264
>> [19856.960152] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967260
>> [19856.964598] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967261
>> [19856.967055] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967262
>> [19879.048926] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967263
>> [19879.048937] __add_stripe_bio: md127: start ff2721beec8c2fa0(536883944+8) 4294967264
>> [19887.395626] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967263
>> [19887.395631] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967262
>> [19887.395634] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967261
>> [19887.395637] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967260
>> [19887.395639] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536883944+8) 4294967259
>> [19887.406610] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967260
>> [19916.087911] __add_stripe_bio: md127: start ff2721beec8c2fa0(536878120+8) 4294967261
>> [19918.951492] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967260
>> [19947.259645] __add_stripe_bio: md127: start ff2721beec8c2fa0(536876264+8) 4294967261
>> [19983.717648] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967260
>> [19983.717650] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(536876264+8) 4294967259
>> [19983.723154] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
>> [19983.723284] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967261
>> [19983.723330] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967262
>> [19983.723447] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967263
>> [20015.225720] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967264
>> [20015.225737] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967265
>> [20015.233248] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967264
>> [20015.233249] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967263
>> [20015.233250] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967262
>> [20015.233251] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967261
>> [20015.233252] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967260
>> [20015.233253] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555348520+8) 4294967259
>> [20039.634420] __add_stripe_bio: md127: start ff2721beec8c2fa0(555348520+8) 4294967260
>> [20059.881519] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967263
>> [20059.881521] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967262
>> [20059.881522] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967261
>> [20059.881523] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967260
>> [20059.881524] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555355880+8) 4294967259
>> [20091.703960] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967260
>> [20091.704062] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967261
>> [20091.704130] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967262
>> [20091.704371] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967263
>> [20091.704597] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967264
>> [20091.705014] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967265
>> [20091.705043] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967266
>> [20091.705080] __add_stripe_bio: md127: start ff2721beec8c2fa0(555360360+8) 4294967267
>> [20107.172534] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967266
>> [20107.416045] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967265
>> [20107.659569] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967264
>> [20107.903083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967263
>> [20108.146684] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967262
>> [20108.390242] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967261
>> [20108.633740] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967260
>> [20108.877229] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555360360+8) 4294967259
>> [20125.925086] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967260
>> [20125.925103] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967261
>> [20128.394916] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967262
>> [20128.583655] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967263
>> [20132.751983] __add_stripe_bio: md127: start ff2721beec8c2fa0(555361448+8) 4294967264
>> [20138.332744] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967260
>> [20138.333973] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967261
>> [20138.334178] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967262
>> [20138.335009] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967263
>> [20138.335115] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967264
>> [20138.335265] __add_stripe_bio: md127: start ff2721beec8c2fa0(555365992+8) 4294967265
>> [20138.338858] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967264
>> [20138.338860] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967263
>> [20138.338862] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967262
>> [20138.338864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967261
>> [20138.338866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967260
>> [20138.338868] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555365992+8) 4294967259
>> [20166.832981] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967260
>> [20166.833229] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967261
>> [20166.834027] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967262
>> [20196.134888] __add_stripe_bio: md127: start ff2721beec8c2fa0(555368936+8) 4294967263
>> [20199.500306] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967263
>> [20199.500310] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967262
>> [20199.500313] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967261
>> [20199.500317] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967260
>> [20199.500321] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
>> [20199.942600] __add_stripe_bio: md127: start ff2721beec8c2fa0(555373992+8) 4294967260
>> [20199.944367] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555373992+8) 4294967259
>> [20245.088563] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967260
>> [20245.088642] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967261
>> [20245.088687] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967262
>> [20245.088777] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967263
>> [20245.091384] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967264
>> [20245.091670] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967265
>> [20245.091900] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967266
>> [20245.092055] __add_stripe_bio: md127: start ff2721beec8c2fa0(555378728+8) 4294967267
>> [20271.055283] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967266
>> [20271.064573] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967265
>> [20271.073864] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967264
>> [20271.083165] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967263
>> [20275.933633] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967262
>> [20275.942927] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967261
>> [20275.952228] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967260
>> [20275.961535] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(555378728+8) 4294967259
>> [20280.815537] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967260
>> [20280.816905] __add_stripe_bio: md127: start ff2721beec8c2fa0(805450536+8) 4294967261
>> [20442.104270] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967260
>> [20443.448533] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805450536+8) 4294967259
>> [20445.747762] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967260
>> [20445.747900] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967261
>> [20445.747918] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967262
>> [20445.748615] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967263
>> [20494.667635] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967264
>> [20494.667769] __add_stripe_bio: md127: start ff2721beec8c2fa0(805355304+8) 4294967265
>> [20524.978466] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967264
>> [20524.987753] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967263
>> [20524.997049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967262
>> [20525.006349] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967261
>> [20533.505202] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967260
>> [20533.514488] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805355304+8) 4294967259
>> [20535.464796] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967260
>> [20535.465312] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967261
>> [20547.361843] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967262
>> [20547.362543] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967263
>> [20547.362994] __add_stripe_bio: md127: start ff2721beec8c2fa0(805352616+8) 4294967264
>> [20565.098049] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967263
>> [20565.098051] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967262
>> [20565.098052] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967261
>> [20565.098054] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967260
>> [20565.098055] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805352616+8) 4294967259
>> [20565.099574] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
>> [20565.099733] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
>> [20565.099960] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
>> [20609.002609] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
>> [20609.011900] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
>> [20609.021187] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259
>> [20612.483895] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967260
>> [20612.484023] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967261
>> [20612.484674] __add_stripe_bio: md127: start ff2721beec8c2fa0(805346088+8) 4294967262
>> [20641.495298] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967261
>> [20641.504590] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967260
>> [20641.513885] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(805346088+8) 4294967259
>>
>> Best regards,
>> Christian Theune
>>
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>
> 
> Best regards,
> Christian Theune
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-01  8:33                                                                         ` Christian Theune
  2024-11-03 15:54                                                                           ` Christian Theune
  2024-11-04 11:29                                                                           ` Yu Kuai
@ 2024-11-04 11:40                                                                           ` Yu Kuai
  2024-11-04 12:18                                                                             ` Yu Kuai
  2 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-11-04 11:40 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević

Hi,

On 2024/11/01 16:33, Christian Theune wrote:
> I dug out a different one that goes back longer but even that one seems like something was missing early on when I didn’t have the serial console attached.
> 
> I’m wondering whether this indicates an issue during initialization? I’m going to reboot the machine and make sure I get the early logs with those numbers.
> 
> [  405.347345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22301786792+8) 4294967259

For this log, let's assume the first start is from here.
> [  432.542465] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967260
> [  432.542469] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967261
> [  434.272964] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967262
> [  434.273175] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967263
> [  434.273189] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967264
> [  434.273285] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967265
> [  434.274063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967264
> [  434.274066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967263
> [  434.274070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967262
> [  434.274073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967261
> [  434.274078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967260
> [  434.274083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967259
> [  434.276609] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967260
> [  434.278939] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967261
> [  464.922354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967260
> [  464.931833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967259
> [  466.964557] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967260
> [  466.964616] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967261
> [  474.399930] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967262
> [  474.451451] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967263
> [  489.447079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967262
> [  489.456574] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967261
> [  489.466069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967260
> [  489.475565] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967259
> [  491.235517] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967260
> [  491.235602] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967261
> [  498.153108] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967262
> [  498.156307] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967263
> [  530.332619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967262
> [  530.342110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967261
> [  530.351595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967260
> [  530.361082] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967259
> [  535.176774] __add_stripe_bio: md127: start ff2721beec8c2fa0(24985208424+8) 4294967260
> [  549.125326] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24985208424+8) 4294967259

Up to this point everything is fine: start and end are balanced for this
stripe head.
> [  549.635782] __add_stripe_bio: md127: start ff2721beec8c2fa0(25521770024+8) 4294967261
> [  590.875593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967260
> [  590.885081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967259
> [  596.973863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967263
> [  596.973866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967262
> [  596.973869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967261
> [  596.973871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967260
> [  596.973881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967259

Then, oops, this 'sh' starts just once here but ends many times. It's
unlikely that those ends correspond to log entries from much earlier, so
I'm almost convinced that this problem is due to unbalanced start and
end, and that the huge number is due to underflow.

Let me dig more. :)

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-04 11:29                                                                           ` Yu Kuai
@ 2024-11-04 11:51                                                                             ` Christian Theune
  2024-11-04 12:30                                                                               ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-04 11:51 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

> On 4. Nov 2024, at 12:29, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
>>> On 1. Nov 2024, at 08:56, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Hi,
>>> 
>>> ok, so the journal didn’t have that because it was way too much. Looks like I actually need to stick with the serial console logging after all.
>>> 
>>> I dug out a different one that goes back longer but even that one seems like something was missing early on when I didn’t have the serial console attached.
>>> 
>>> I’m wondering whether this indicates an issue during initialization? I’m going to reboot the machine and make sure I get the early logs with those numbers.
> 
> I can add a check and trigger a BUG_ON() if you're fine with that; that
> way the log should be much smaller.

Yeah, I can work with that if it helps.

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-04 11:40                                                                           ` Yu Kuai
@ 2024-11-04 12:18                                                                             ` Yu Kuai
  2024-11-04 14:45                                                                               ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-11-04 12:18 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/11/04 19:40, Yu Kuai wrote:
> Hi,
> 
> On 2024/11/01 16:33, Christian Theune wrote:
>> I dug out a different one that goes back longer but even that one 
>> seems like something was missing early on when I didn’t have the 
>> serial console attached.
>>
>> I’m wondering whether this indicates an issue during initialization? 
>> I’m going to reboot the machine and make sure I get the early logs 
>> with those numbers.
>>
>> [  405.347345] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22301786792+8) 4294967259
> 
> For this log, let's assume the first start is from here.
>> [  432.542465] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967260
>> [  432.542469] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967261
>> [  434.272964] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967262
>> [  434.273175] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967263
>> [  434.273189] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967264
>> [  434.273285] __add_stripe_bio: md127: start ff2721beec8c2fa0(22837701992+8) 4294967265
>> [  434.274063] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967264
>> [  434.274066] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967263
>> [  434.274070] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967262
>> [  434.274073] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967261
>> [  434.274078] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967260
>> [  434.274083] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(22837701992+8) 4294967259
>> [  434.276609] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967260
>> [  434.278939] __add_stripe_bio: md127: start ff2721beec8c2fa0(23374951848+8) 4294967261
>> [  464.922354] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967260
>> [  464.931833] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23374951848+8) 4294967259
>> [  466.964557] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967260
>> [  466.964616] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967261
>> [  474.399930] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967262
>> [  474.451451] __add_stripe_bio: md127: start ff2721beec8c2fa0(23912715112+8) 4294967263
>> [  489.447079] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967262
>> [  489.456574] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967261
>> [  489.466069] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967260
>> [  489.475565] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(23912715112+8) 4294967259
>> [  491.235517] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967260
>> [  491.235602] __add_stripe_bio: md127: start ff2721beec8c2fa0(24448073512+8) 4294967261
>> [  498.153108] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967262
>> [  498.156307] __add_stripe_bio: md127: start ff2721beec8c2fa0(24716445096+8) 4294967263
>> [  530.332619] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967262
>> [  530.342110] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967261
>> [  530.351595] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967260
>> [  530.361082] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24716445096+8) 4294967259
>> [  535.176774] __add_stripe_bio: md127: start ff2721beec8c2fa0(24985208424+8) 4294967260
>> [  549.125326] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(24985208424+8) 4294967259
> 
> Up to this point everything looks fine: start and end are balanced for
> this stripe head.
>> [  549.635782] __add_stripe_bio: md127: start ff2721beec8c2fa0(25521770024+8) 4294967261
>> [  590.875593] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967260
>> [  590.885081] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(25521770024+8) 4294967259
>> [  596.973863] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967263
>> [  596.973866] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967262
>> [  596.973869] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967261
>> [  596.973871] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967260
>> [  596.973881] handle_stripe_clean_event: md127: end ff2721beec8c2fa0(26057037928+8) 4294967259
> 
> Then, oops: this 'sh' starts just once here but ends many times. It's
> unlikely that those ends correspond to the much earlier log entries, so
> I'm almost convinced this problem is due to unbalanced start and end,
> and the huge number is due to underflow.
> 
> Let me dig more. :)

I think I found a problem by code review; can you test the following
patch? (Note this is still against the latest mainline.)

Thanks,
Kuai

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index dc2ea636d173..04f32173839a 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4042,6 +4042,8 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 			     test_bit(R5_SkipCopy, &dev->flags))) {
 				/* We can return any write requests */
 				struct bio *wbi, *wbi2;
+				bool written = false;
+
 				pr_debug("Return write for disc %d\n", i);
 				if (test_and_clear_bit(R5_Discard, &dev->flags))
 					clear_bit(R5_UPTODATE, &dev->flags);
@@ -4054,6 +4056,9 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 				dev->page = dev->orig_page;
 				wbi = dev->written;
 				dev->written = NULL;
+				if (wbi)
+					written = true;
+
 				while (wbi && wbi->bi_iter.bi_sector <
 					dev->sector + RAID5_STRIPE_SECTORS(conf)) {
 					wbi2 = r5_next_bio(conf, wbi, dev->sector);
@@ -4061,10 +4066,13 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 					bio_endio(wbi);
 					wbi = wbi2;
 				}
-				conf->mddev->bitmap_ops->endwrite(conf->mddev,
-					sh->sector, RAID5_STRIPE_SECTORS(conf),
-					!test_bit(STRIPE_DEGRADED, &sh->state),
-					false);
+
+				if (written)
+					conf->mddev->bitmap_ops->endwrite(conf->mddev,
+						sh->sector, RAID5_STRIPE_SECTORS(conf),
+						!test_bit(STRIPE_DEGRADED, &sh->state),
+						false);
+
 				if (head_sh->batch_head) {
 					sh = list_first_entry(&sh->batch_list,
							      struct stripe_head,

> 
> Thanks,
> Kuai
> 


^ permalink raw reply related	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-04 11:51                                                                             ` Christian Theune
@ 2024-11-04 12:30                                                                               ` Yu Kuai
  0 siblings, 0 replies; 88+ messages in thread
From: Yu Kuai @ 2024-11-04 12:30 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/11/04 19:51, Christian Theune wrote:
> Hi,
> 
>> On 4. Nov 2024, at 12:29, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>>>> On 1. Nov 2024, at 08:56, Christian Theune <ct@flyingcircus.io> wrote:
>>>>
>>>> Hi,
>>>>
>>>> ok, so the journal didn’t have that because it was way too much. Looks like I actually need to stick with the serial console logging after all.
>>>>
>>>> I dug out a different one that goes back longer but even that one seems like something was missing early on when I didn’t have the serial console attached.
>>>>
>>>> I’m wondering whether this indicates an issue during initialization? I’m going to reboot the machine and make sure I get the early logs with those numbers.
>>
>> I can add a check and trigger a BUG_ON() if you're fine with that; that
>> way the log should be much smaller.
> 
> Yeah, I can work with that if it helps.
> 
> Kind regards,
> Christian Theune
> 

Please try the other patch first; I hope it fixes the problem, and if it
does there is no need for more debugging. :)

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-04 12:18                                                                             ` Yu Kuai
@ 2024-11-04 14:45                                                                               ` Christian Theune
  2024-11-04 20:04                                                                                 ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-04 14:45 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

> On 4. Nov 2024, at 13:18, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> 
> I think I found a problem by code review; can you test the following
> patch? (Note this is still against the latest mainline.)

Thanks - I can try that. I think I got the gist of it and adapted it to 6.11.6:
https://github.com/flyingcircusio/linux-stable/pull/2/files

(I’m not properly tooled to apply embedded patches from the mailing lists; I need to improve my toolchain if I have to dive this deep more often in the future.)
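For what it's worth, the usual toolchain here is b4 (`b4 am <message-id>` fetches a series from lore.kernel.org as an mbox) followed by `git am`. A self-contained sketch of the `git am` half, using a locally generated patch in place of a downloaded one:

```shell
# Demo of the format-patch / git-am round trip. On a real list,
# `b4 am <message-id>` would produce the mbox that git-am consumes;
# here we generate the patch locally so the example is self-contained.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "base"
echo "fix" > raid5.txt
git add raid5.txt
git commit -q -m "raid5: example fix"
git format-patch -1 -o "$tmp/patches" >/dev/null
git reset -q --hard HEAD~1             # drop the commit, keep the patch
git am "$tmp/patches"/0001-*.patch     # re-apply it from the mbox file
git log --oneline -1                   # the subject is back on top
```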

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-04 14:45                                                                               ` Christian Theune
@ 2024-11-04 20:04                                                                                 ` Christian Theune
  2024-11-05  1:20                                                                                   ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-04 20:04 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

Unfortunately, no joy: after 5000 seconds it got stuck again. Here’s the full list of tracebacks once more.

Happy to add more debugging code if you hand me a patch.

[12240.198494] sysrq: Show Blocked State
[12240.202734] task:kworker/u132:0  state:D stack:0     pid:214   tgid:214   ppid:2      flags:0x00004000
[12240.202740] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.202750] Call Trace:
[12240.202752]  <TASK>
[12240.202756]  __schedule+0x425/0x1460
[12240.202763]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.202769]  schedule+0x27/0xf0
[12240.202773]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.202782]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.202800]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.202807]  raid5_make_request+0x364/0x1290 [raid456]
[12240.202814]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.202816]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.202821]  ? __pfx_woken_wake_function+0x10/0x10
[12240.202825]  ? bio_split_rw+0x141/0x2a0
[12240.202830]  md_handle_request+0x153/0x270 [md_mod]
[12240.202838]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.202841]  __submit_bio+0x190/0x240
[12240.202845]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.202848]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.202851]  ? submit_bio_noacct+0x47/0x5b0
[12240.202854]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.202858]  process_one_work+0x18f/0x3b0
[12240.202863]  worker_thread+0x21f/0x330
[12240.202865]  ? __pfx_worker_thread+0x10/0x10
[12240.202868]  kthread+0xcd/0x100
[12240.202871]  ? __pfx_kthread+0x10/0x10
[12240.202874]  ret_from_fork+0x31/0x50
[12240.202878]  ? __pfx_kthread+0x10/0x10
[12240.202881]  ret_from_fork_asm+0x1a/0x30
[12240.202887]  </TASK>
[12240.202939] task:kworker/u129:3  state:D stack:0     pid:436   tgid:436   ppid:2      flags:0x00004000
[12240.202943] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
[12240.203028] Call Trace:
[12240.203029]  <TASK>
[12240.203031]  __schedule+0x425/0x1460
[12240.203034]  ? __blk_flush_plug+0xf5/0x150
[12240.203040]  schedule+0x27/0xf0
[12240.203042]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
[12240.203097]  ? __pfx_default_wake_function+0x10/0x10
[12240.203101]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
[12240.203152]  xlog_write+0x310/0x470 [xfs]
[12240.203203]  xlog_cil_push_work+0x6a5/0x880 [xfs]
[12240.203256]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203260]  process_one_work+0x18f/0x3b0
[12240.203263]  worker_thread+0x21f/0x330
[12240.203266]  ? __pfx_worker_thread+0x10/0x10
[12240.203268]  kthread+0xcd/0x100
[12240.203270]  ? __pfx_kthread+0x10/0x10
[12240.203273]  ret_from_fork+0x31/0x50
[12240.203276]  ? __pfx_kthread+0x10/0x10
[12240.203278]  ret_from_fork_asm+0x1a/0x30
[12240.203283]  </TASK>
[12240.203289] task:kworker/u132:1  state:D stack:0     pid:490   tgid:490   ppid:2      flags:0x00004000
[12240.203293] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.203296] Call Trace:
[12240.203298]  <TASK>
[12240.203299]  __schedule+0x425/0x1460
[12240.203302]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203305]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203307]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
[12240.203313]  schedule+0x27/0xf0
[12240.203316]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.203323]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.203326]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.203331]  raid5_make_request+0x364/0x1290 [raid456]
[12240.203336]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203338]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.203342]  ? __pfx_woken_wake_function+0x10/0x10
[12240.203345]  ? bio_split_rw+0x141/0x2a0
[12240.203349]  md_handle_request+0x153/0x270 [md_mod]
[12240.203356]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203359]  __submit_bio+0x190/0x240
[12240.203363]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.203366]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203368]  ? submit_bio_noacct+0x47/0x5b0
[12240.203372]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.203375]  process_one_work+0x18f/0x3b0
[12240.203377]  worker_thread+0x21f/0x330
[12240.203380]  ? __pfx_worker_thread+0x10/0x10
[12240.203382]  kthread+0xcd/0x100
[12240.203385]  ? __pfx_kthread+0x10/0x10
[12240.203387]  ret_from_fork+0x31/0x50
[12240.203390]  ? __pfx_kthread+0x10/0x10
[12240.203392]  ret_from_fork_asm+0x1a/0x30
[12240.203396]  </TASK>
[12240.203405] task:kworker/u132:2  state:D stack:0     pid:504   tgid:504   ppid:2      flags:0x00004000
[12240.203409] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.203411] Call Trace:
[12240.203412]  <TASK>
[12240.203414]  __schedule+0x425/0x1460
[12240.203419]  schedule+0x27/0xf0
[12240.203422]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.203427]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.203430]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.203435]  raid5_make_request+0x364/0x1290 [raid456]
[12240.203440]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203443]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.203446]  ? __pfx_woken_wake_function+0x10/0x10
[12240.203448]  ? bio_split_rw+0x141/0x2a0
[12240.203453]  md_handle_request+0x153/0x270 [md_mod]
[12240.203459]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203462]  __submit_bio+0x190/0x240
[12240.203466]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.203469]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203472]  ? submit_bio_noacct+0x47/0x5b0
[12240.203475]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.203478]  process_one_work+0x18f/0x3b0
[12240.203481]  worker_thread+0x21f/0x330
[12240.203483]  ? __pfx_worker_thread+0x10/0x10
[12240.203485]  ? __pfx_worker_thread+0x10/0x10
[12240.203487]  kthread+0xcd/0x100
[12240.203489]  ? __pfx_kthread+0x10/0x10
[12240.203492]  ret_from_fork+0x31/0x50
[12240.203495]  ? __pfx_kthread+0x10/0x10
[12240.203497]  ret_from_fork_asm+0x1a/0x30
[12240.203501]  </TASK>
[12240.203557] task:kworker/u132:3  state:D stack:0     pid:1731  tgid:1731  ppid:2      flags:0x00004000
[12240.203561] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.203563] Call Trace:
[12240.203565]  <TASK>
[12240.203567]  __schedule+0x425/0x1460
[12240.203572]  schedule+0x27/0xf0
[12240.203575]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.203580]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.203583]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.203588]  raid5_make_request+0x364/0x1290 [raid456]
[12240.203593]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203595]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.203598]  ? __pfx_woken_wake_function+0x10/0x10
[12240.203600]  ? bio_split_rw+0x141/0x2a0
[12240.203605]  md_handle_request+0x153/0x270 [md_mod]
[12240.203611]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203614]  __submit_bio+0x190/0x240
[12240.203618]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.203621]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203624]  ? submit_bio_noacct+0x47/0x5b0
[12240.203627]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.203630]  process_one_work+0x18f/0x3b0
[12240.203633]  worker_thread+0x21f/0x330
[12240.203635]  ? __pfx_worker_thread+0x10/0x10
[12240.203637]  ? __pfx_worker_thread+0x10/0x10
[12240.203639]  kthread+0xcd/0x100
[12240.203641]  ? __pfx_kthread+0x10/0x10
[12240.203644]  ret_from_fork+0x31/0x50
[12240.203646]  ? __pfx_kthread+0x10/0x10
[12240.203649]  ret_from_fork_asm+0x1a/0x30
[12240.203653]  </TASK>
[12240.203682] task:kworker/u132:4  state:D stack:0     pid:2131  tgid:2131  ppid:2      flags:0x00004000
[12240.203686] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.203689] Call Trace:
[12240.203690]  <TASK>
[12240.203691]  __schedule+0x425/0x1460
[12240.203697]  schedule+0x27/0xf0
[12240.203699]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.203705]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.203707]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.203712]  raid5_make_request+0x364/0x1290 [raid456]
[12240.203718]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203720]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.203723]  ? __pfx_woken_wake_function+0x10/0x10
[12240.203725]  ? bio_split_rw+0x141/0x2a0
[12240.203730]  md_handle_request+0x153/0x270 [md_mod]
[12240.203736]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203739]  __submit_bio+0x190/0x240
[12240.203743]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.203746]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203749]  ? submit_bio_noacct+0x47/0x5b0
[12240.203752]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.203754]  process_one_work+0x18f/0x3b0
[12240.203758]  worker_thread+0x21f/0x330
[12240.203760]  ? __pfx_worker_thread+0x10/0x10
[12240.203762]  ? __pfx_worker_thread+0x10/0x10
[12240.203764]  kthread+0xcd/0x100
[12240.203766]  ? __pfx_kthread+0x10/0x10
[12240.203769]  ret_from_fork+0x31/0x50
[12240.203771]  ? __pfx_kthread+0x10/0x10
[12240.203774]  ret_from_fork_asm+0x1a/0x30
[12240.203778]  </TASK>
[12240.203780] task:kworker/u132:5  state:D stack:0     pid:2132  tgid:2132  ppid:2      flags:0x00004000
[12240.203784] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.203793] Call Trace:
[12240.203794]  <TASK>
[12240.203795]  __schedule+0x425/0x1460
[12240.203800]  schedule+0x27/0xf0
[12240.203803]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.203808]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.203812]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.203817]  raid5_make_request+0x364/0x1290 [raid456]
[12240.203821]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203824]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.203827]  ? __pfx_woken_wake_function+0x10/0x10
[12240.203829]  ? bio_split_rw+0x141/0x2a0
[12240.203834]  md_handle_request+0x153/0x270 [md_mod]
[12240.203840]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203843]  __submit_bio+0x190/0x240
[12240.203847]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.203850]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203853]  ? submit_bio_noacct+0x47/0x5b0
[12240.203856]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.203859]  process_one_work+0x18f/0x3b0
[12240.203862]  worker_thread+0x21f/0x330
[12240.203864]  ? __pfx_worker_thread+0x10/0x10
[12240.203866]  kthread+0xcd/0x100
[12240.203869]  ? __pfx_kthread+0x10/0x10
[12240.203871]  ret_from_fork+0x31/0x50
[12240.203874]  ? __pfx_kthread+0x10/0x10
[12240.203876]  ret_from_fork_asm+0x1a/0x30
[12240.203880]  </TASK>
[12240.203882] task:kworker/u132:7  state:D stack:0     pid:2134  tgid:2134  ppid:2      flags:0x00004000
[12240.203885] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.203888] Call Trace:
[12240.203889]  <TASK>
[12240.203891]  __schedule+0x425/0x1460
[12240.203893]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203898]  schedule+0x27/0xf0
[12240.203901]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.203906]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.203909]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.203914]  raid5_make_request+0x364/0x1290 [raid456]
[12240.203919]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203922]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.203925]  ? __pfx_woken_wake_function+0x10/0x10
[12240.203927]  ? bio_split_rw+0x141/0x2a0
[12240.203932]  md_handle_request+0x153/0x270 [md_mod]
[12240.203938]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203941]  __submit_bio+0x190/0x240
[12240.203945]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.203948]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.203950]  ? submit_bio_noacct+0x47/0x5b0
[12240.203953]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.203956]  process_one_work+0x18f/0x3b0
[12240.203959]  worker_thread+0x21f/0x330
[12240.203961]  ? __pfx_worker_thread+0x10/0x10
[12240.203964]  kthread+0xcd/0x100
[12240.203966]  ? __pfx_kthread+0x10/0x10
[12240.203969]  ret_from_fork+0x31/0x50
[12240.203971]  ? __pfx_kthread+0x10/0x10
[12240.203974]  ret_from_fork_asm+0x1a/0x30
[12240.203978]  </TASK>
[12240.203981] task:kworker/u132:9  state:D stack:0     pid:2136  tgid:2136  ppid:2      flags:0x00004000
[12240.203984] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.203987] Call Trace:
[12240.203988]  <TASK>
[12240.203989]  __schedule+0x425/0x1460
[12240.203995]  schedule+0x27/0xf0
[12240.203997]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.204003]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.204006]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.204011]  raid5_make_request+0x364/0x1290 [raid456]
[12240.204016]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204018]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.204021]  ? __pfx_woken_wake_function+0x10/0x10
[12240.204023]  ? bio_split_rw+0x141/0x2a0
[12240.204028]  md_handle_request+0x153/0x270 [md_mod]
[12240.204034]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204037]  __submit_bio+0x190/0x240
[12240.204041]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.204044]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204046]  ? submit_bio_noacct+0x47/0x5b0
[12240.204050]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.204052]  process_one_work+0x18f/0x3b0
[12240.204055]  worker_thread+0x21f/0x330
[12240.204058]  ? __pfx_worker_thread+0x10/0x10
[12240.204060]  kthread+0xcd/0x100
[12240.204062]  ? __pfx_kthread+0x10/0x10
[12240.204065]  ret_from_fork+0x31/0x50
[12240.204067]  ? __pfx_kthread+0x10/0x10
[12240.204069]  ret_from_fork_asm+0x1a/0x30
[12240.204074]  </TASK>
[12240.204076] task:kworker/u132:11 state:D stack:0     pid:2138  tgid:2138  ppid:2      flags:0x00004000
[12240.204078] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.204081] Call Trace:
[12240.204082]  <TASK>
[12240.204084]  __schedule+0x425/0x1460
[12240.204089]  schedule+0x27/0xf0
[12240.204092]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.204097]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.204100]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.204105]  raid5_make_request+0x364/0x1290 [raid456]
[12240.204110]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204113]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.204116]  ? __pfx_woken_wake_function+0x10/0x10
[12240.204119]  ? bio_split_rw+0x141/0x2a0
[12240.204123]  md_handle_request+0x153/0x270 [md_mod]
[12240.204129]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204132]  __submit_bio+0x190/0x240
[12240.204136]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.204139]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204142]  ? submit_bio_noacct+0x47/0x5b0
[12240.204145]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.204148]  process_one_work+0x18f/0x3b0
[12240.204151]  worker_thread+0x21f/0x330
[12240.204154]  ? __pfx_worker_thread+0x10/0x10
[12240.204156]  kthread+0xcd/0x100
[12240.204159]  ? __pfx_kthread+0x10/0x10
[12240.204161]  ret_from_fork+0x31/0x50
[12240.204164]  ? __pfx_kthread+0x10/0x10
[12240.204166]  ret_from_fork_asm+0x1a/0x30
[12240.204171]  </TASK>
[12240.204172] task:kworker/u132:12 state:D stack:0     pid:2139  tgid:2139  ppid:2      flags:0x00004000
[12240.204175] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.204178] Call Trace:
[12240.204179]  <TASK>
[12240.204180]  __schedule+0x425/0x1460
[12240.204186]  schedule+0x27/0xf0
[12240.204188]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.204194]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.204197]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.204202]  raid5_make_request+0x364/0x1290 [raid456]
[12240.204207]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204209]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.204212]  ? __pfx_woken_wake_function+0x10/0x10
[12240.204215]  ? bio_split_rw+0x141/0x2a0
[12240.204220]  md_handle_request+0x153/0x270 [md_mod]
[12240.204226]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204229]  __submit_bio+0x190/0x240
[12240.204233]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.204235]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204238]  ? submit_bio_noacct+0x47/0x5b0
[12240.204241]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.204244]  process_one_work+0x18f/0x3b0
[12240.204247]  worker_thread+0x21f/0x330
[12240.204249]  ? __pfx_worker_thread+0x10/0x10
[12240.204251]  kthread+0xcd/0x100
[12240.204254]  ? __pfx_kthread+0x10/0x10
[12240.204257]  ret_from_fork+0x31/0x50
[12240.204259]  ? __pfx_kthread+0x10/0x10
[12240.204261]  ret_from_fork_asm+0x1a/0x30
[12240.204266]  </TASK>
[12240.204267] task:kworker/u132:14 state:D stack:0     pid:2141  tgid:2141  ppid:2      flags:0x00004000
[12240.204270] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.204273] Call Trace:
[12240.204274]  <TASK>
[12240.204276]  __schedule+0x425/0x1460
[12240.204278]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204283]  schedule+0x27/0xf0
[12240.204286]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.204291]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.204294]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.204299]  raid5_make_request+0x364/0x1290 [raid456]
[12240.204304]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204306]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.204309]  ? __pfx_woken_wake_function+0x10/0x10
[12240.204312]  ? bio_split_rw+0x141/0x2a0
[12240.204317]  md_handle_request+0x153/0x270 [md_mod]
[12240.204322]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204326]  __submit_bio+0x190/0x240
[12240.204330]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.204333]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204335]  ? submit_bio_noacct+0x47/0x5b0
[12240.204338]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.204341]  process_one_work+0x18f/0x3b0
[12240.204344]  worker_thread+0x21f/0x330
[12240.204346]  ? __pfx_worker_thread+0x10/0x10
[12240.204348]  kthread+0xcd/0x100
[12240.204351]  ? __pfx_kthread+0x10/0x10
[12240.204354]  ret_from_fork+0x31/0x50
[12240.204356]  ? __pfx_kthread+0x10/0x10
[12240.204358]  ret_from_fork_asm+0x1a/0x30
[12240.204363]  </TASK>
[12240.204364] task:kworker/u132:15 state:D stack:0     pid:2142  tgid:2142  ppid:2      flags:0x00004000
[12240.204367] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.204370] Call Trace:
[12240.204371]  <TASK>
[12240.204372]  __schedule+0x425/0x1460
[12240.204377]  schedule+0x27/0xf0
[12240.204380]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.204385]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.204388]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.204393]  raid5_make_request+0x364/0x1290 [raid456]
[12240.204398]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204401]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.204403]  ? __pfx_woken_wake_function+0x10/0x10
[12240.204406]  ? bio_split_rw+0x141/0x2a0
[12240.204410]  md_handle_request+0x153/0x270 [md_mod]
[12240.204416]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204419]  __submit_bio+0x190/0x240
[12240.204424]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.204427]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204429]  ? submit_bio_noacct+0x47/0x5b0
[12240.204432]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.204435]  process_one_work+0x18f/0x3b0
[12240.204438]  worker_thread+0x21f/0x330
[12240.204441]  ? __pfx_worker_thread+0x10/0x10
[12240.204443]  kthread+0xcd/0x100
[12240.204445]  ? __pfx_kthread+0x10/0x10
[12240.204448]  ret_from_fork+0x31/0x50
[12240.204450]  ? __pfx_kthread+0x10/0x10
[12240.204453]  ret_from_fork_asm+0x1a/0x30
[12240.204457]  </TASK>
[12240.204458] task:kworker/u132:16 state:D stack:0     pid:2143  tgid:2143  ppid:2      flags:0x00004000
[12240.204461] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.204464] Call Trace:
[12240.204465]  <TASK>
[12240.204466]  __schedule+0x425/0x1460
[12240.204471]  schedule+0x27/0xf0
[12240.204474]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.204480]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.204482]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.204487]  raid5_make_request+0x364/0x1290 [raid456]
[12240.204493]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204495]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.204497]  ? __pfx_woken_wake_function+0x10/0x10
[12240.204500]  ? bio_split_rw+0x141/0x2a0
[12240.204505]  md_handle_request+0x153/0x270 [md_mod]
[12240.204511]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204514]  __submit_bio+0x190/0x240
[12240.204518]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.204521]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.204524]  ? submit_bio_noacct+0x47/0x5b0
[12240.204527]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.204530]  process_one_work+0x18f/0x3b0
[12240.204532]  worker_thread+0x21f/0x330
[12240.204535]  ? __pfx_worker_thread+0x10/0x10
[12240.204537]  kthread+0xcd/0x100
[12240.204540]  ? __pfx_kthread+0x10/0x10
[12240.204542]  ret_from_fork+0x31/0x50
[12240.204544]  ? __pfx_kthread+0x10/0x10
[12240.204547]  ret_from_fork_asm+0x1a/0x30
[12240.204551]  </TASK>
[12240.204553] task:kworker/u132:17 state:D stack:0     pid:2144  tgid:2144  ppid:2      flags:0x00004000
[12240.204556] Workqueue: writeback wb_workfn (flush-254:4)
[12240.204560] Call Trace:
[12240.204561]  <TASK>
[12240.204563]  __schedule+0x425/0x1460
[12240.204568]  schedule+0x27/0xf0
[12240.204570]  schedule_timeout+0x15d/0x170
[12240.204575]  __wait_for_common+0x90/0x1c0
[12240.204577]  ? __pfx_schedule_timeout+0x10/0x10
[12240.204581]  xfs_buf_iowait+0x1c/0xc0 [xfs]
[12240.204655]  __xfs_buf_submit+0x132/0x1e0 [xfs]
[12240.204708]  xfs_buf_read_map+0x129/0x2a0 [xfs]
[12240.204761]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[12240.204837]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
[12240.204910]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[12240.204972]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
[12240.205025]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
[12240.205077]  xfs_btree_lookup+0xea/0x500 [xfs]
[12240.205129]  xfs_alloc_fixup_trees+0x70/0x520 [xfs]
[12240.205195]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
[12240.205245]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
[12240.205297]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
[12240.205347]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
[12240.205416]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
[12240.205471]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
[12240.205535]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
[12240.205585]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
[12240.205639]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
[12240.205689]  xfs_map_blocks+0x257/0x420 [xfs]
[12240.205762]  iomap_writepages+0x271/0x9b0
[12240.205768]  xfs_vm_writepages+0x67/0x90 [xfs]
[12240.205826]  do_writepages+0x76/0x260
[12240.205832]  __writeback_single_inode+0x3d/0x350
[12240.205836]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.205838]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.205842]  writeback_sb_inodes+0x21c/0x4e0
[12240.205853]  __writeback_inodes_wb+0x4c/0xf0
[12240.205856]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.205859]  wb_writeback+0x193/0x310
[12240.205864]  wb_workfn+0x357/0x450
[12240.205868]  process_one_work+0x18f/0x3b0
[12240.205871]  worker_thread+0x21f/0x330
[12240.205874]  ? __pfx_worker_thread+0x10/0x10
[12240.205876]  kthread+0xcd/0x100
[12240.205878]  ? __pfx_kthread+0x10/0x10
[12240.205881]  ret_from_fork+0x31/0x50
[12240.205884]  ? __pfx_kthread+0x10/0x10
[12240.205886]  ret_from_fork_asm+0x1a/0x30
[12240.205891]  </TASK>
[12240.205893] task:kworker/u132:18 state:D stack:0     pid:2145  tgid:2145  ppid:2      flags:0x00004000
[12240.205896] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.205899] Call Trace:
[12240.205900]  <TASK>
[12240.205901]  __schedule+0x425/0x1460
[12240.205907]  schedule+0x27/0xf0
[12240.205910]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.205916]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.205919]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.205925]  raid5_make_request+0x364/0x1290 [raid456]
[12240.205930]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.205933]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.205936]  ? __pfx_woken_wake_function+0x10/0x10
[12240.205939]  ? bio_split_rw+0x141/0x2a0
[12240.205943]  md_handle_request+0x153/0x270 [md_mod]
[12240.205950]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.205953]  __submit_bio+0x190/0x240
[12240.205957]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.205961]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.205963]  ? submit_bio_noacct+0x47/0x5b0
[12240.205966]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.205970]  process_one_work+0x18f/0x3b0
[12240.205972]  worker_thread+0x21f/0x330
[12240.205975]  ? __pfx_worker_thread+0x10/0x10
[12240.205977]  kthread+0xcd/0x100
[12240.205980]  ? __pfx_kthread+0x10/0x10
[12240.205982]  ret_from_fork+0x31/0x50
[12240.205985]  ? __pfx_kthread+0x10/0x10
[12240.205987]  ret_from_fork_asm+0x1a/0x30
[12240.205992]  </TASK>
[12240.205993] task:kworker/u132:19 state:D stack:0     pid:2146  tgid:2146  ppid:2      flags:0x00004000
[12240.205996] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.205998] Call Trace:
[12240.205999]  <TASK>
[12240.206001]  __schedule+0x425/0x1460
[12240.206007]  schedule+0x27/0xf0
[12240.206009]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206015]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206018]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206022]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206028]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206030]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206033]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206035]  ? bio_split_rw+0x141/0x2a0
[12240.206040]  md_handle_request+0x153/0x270 [md_mod]
[12240.206046]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206050]  __submit_bio+0x190/0x240
[12240.206053]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206057]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206059]  ? submit_bio_noacct+0x47/0x5b0
[12240.206062]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206065]  process_one_work+0x18f/0x3b0
[12240.206068]  worker_thread+0x21f/0x330
[12240.206071]  ? __pfx_worker_thread+0x10/0x10
[12240.206073]  kthread+0xcd/0x100
[12240.206075]  ? __pfx_kthread+0x10/0x10
[12240.206078]  ret_from_fork+0x31/0x50
[12240.206080]  ? __pfx_kthread+0x10/0x10
[12240.206083]  ret_from_fork_asm+0x1a/0x30
[12240.206087]  </TASK>
[12240.206089] task:kworker/u132:20 state:D stack:0     pid:2147  tgid:2147  ppid:2      flags:0x00004000
[12240.206091] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206094] Call Trace:
[12240.206095]  <TASK>
[12240.206097]  __schedule+0x425/0x1460
[12240.206102]  schedule+0x27/0xf0
[12240.206105]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206110]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206113]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206118]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206123]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206126]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206128]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206131]  ? bio_split_rw+0x141/0x2a0
[12240.206136]  md_handle_request+0x153/0x270 [md_mod]
[12240.206142]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206145]  __submit_bio+0x190/0x240
[12240.206149]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206152]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206155]  ? submit_bio_noacct+0x47/0x5b0
[12240.206158]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206161]  process_one_work+0x18f/0x3b0
[12240.206163]  worker_thread+0x21f/0x330
[12240.206166]  ? __pfx_worker_thread+0x10/0x10
[12240.206168]  kthread+0xcd/0x100
[12240.206171]  ? __pfx_kthread+0x10/0x10
[12240.206173]  ret_from_fork+0x31/0x50
[12240.206175]  ? __pfx_kthread+0x10/0x10
[12240.206178]  ret_from_fork_asm+0x1a/0x30
[12240.206182]  </TASK>
[12240.206184] task:kworker/u132:22 state:D stack:0     pid:2149  tgid:2149  ppid:2      flags:0x00004000
[12240.206187] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206189] Call Trace:
[12240.206190]  <TASK>
[12240.206192]  __schedule+0x425/0x1460
[12240.206197]  schedule+0x27/0xf0
[12240.206199]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206205]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206208]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206213]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206218]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206220]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206223]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206226]  ? bio_split_rw+0x141/0x2a0
[12240.206230]  md_handle_request+0x153/0x270 [md_mod]
[12240.206236]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206240]  __submit_bio+0x190/0x240
[12240.206243]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206246]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206249]  ? submit_bio_noacct+0x47/0x5b0
[12240.206252]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206255]  process_one_work+0x18f/0x3b0
[12240.206258]  worker_thread+0x21f/0x330
[12240.206260]  ? __pfx_worker_thread+0x10/0x10
[12240.206262]  kthread+0xcd/0x100
[12240.206265]  ? __pfx_kthread+0x10/0x10
[12240.206268]  ret_from_fork+0x31/0x50
[12240.206270]  ? __pfx_kthread+0x10/0x10
[12240.206272]  ret_from_fork_asm+0x1a/0x30
[12240.206277]  </TASK>
[12240.206279] task:kworker/u132:24 state:D stack:0     pid:2151  tgid:2151  ppid:2      flags:0x00004000
[12240.206281] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206284] Call Trace:
[12240.206284]  <TASK>
[12240.206286]  __schedule+0x425/0x1460
[12240.206289]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206294]  schedule+0x27/0xf0
[12240.206296]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206302]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206305]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206310]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206315]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206318]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206320]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206323]  ? bio_split_rw+0x141/0x2a0
[12240.206327]  md_handle_request+0x153/0x270 [md_mod]
[12240.206333]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206336]  __submit_bio+0x190/0x240
[12240.206340]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206344]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206346]  ? submit_bio_noacct+0x47/0x5b0
[12240.206349]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206352]  process_one_work+0x18f/0x3b0
[12240.206355]  worker_thread+0x21f/0x330
[12240.206357]  ? __pfx_worker_thread+0x10/0x10
[12240.206359]  kthread+0xcd/0x100
[12240.206362]  ? __pfx_kthread+0x10/0x10
[12240.206365]  ret_from_fork+0x31/0x50
[12240.206367]  ? __pfx_kthread+0x10/0x10
[12240.206370]  ret_from_fork_asm+0x1a/0x30
[12240.206374]  </TASK>
[12240.206376] task:kworker/u132:25 state:D stack:0     pid:2152  tgid:2152  ppid:2      flags:0x00004000
[12240.206379] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206381] Call Trace:
[12240.206382]  <TASK>
[12240.206384]  __schedule+0x425/0x1460
[12240.206389]  schedule+0x27/0xf0
[12240.206391]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206397]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206400]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206405]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206410]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206412]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206415]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206418]  ? bio_split_rw+0x141/0x2a0
[12240.206422]  md_handle_request+0x153/0x270 [md_mod]
[12240.206428]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206432]  __submit_bio+0x190/0x240
[12240.206436]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206438]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206441]  ? submit_bio_noacct+0x47/0x5b0
[12240.206444]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206447]  process_one_work+0x18f/0x3b0
[12240.206450]  worker_thread+0x21f/0x330
[12240.206452]  ? __pfx_worker_thread+0x10/0x10
[12240.206454]  kthread+0xcd/0x100
[12240.206457]  ? __pfx_kthread+0x10/0x10
[12240.206460]  ret_from_fork+0x31/0x50
[12240.206462]  ? __pfx_kthread+0x10/0x10
[12240.206464]  ret_from_fork_asm+0x1a/0x30
[12240.206469]  </TASK>
[12240.206470] task:kworker/u132:26 state:D stack:0     pid:2153  tgid:2153  ppid:2      flags:0x00004000
[12240.206473] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206475] Call Trace:
[12240.206476]  <TASK>
[12240.206479]  __schedule+0x425/0x1460
[12240.206484]  schedule+0x27/0xf0
[12240.206486]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206492]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206495]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206499]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206504]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206507]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206510]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206512]  ? bio_split_rw+0x141/0x2a0
[12240.206517]  md_handle_request+0x153/0x270 [md_mod]
[12240.206523]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206526]  __submit_bio+0x190/0x240
[12240.206530]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206533]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206536]  ? submit_bio_noacct+0x47/0x5b0
[12240.206539]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206541]  process_one_work+0x18f/0x3b0
[12240.206544]  worker_thread+0x21f/0x330
[12240.206547]  ? __pfx_worker_thread+0x10/0x10
[12240.206549]  kthread+0xcd/0x100
[12240.206551]  ? __pfx_kthread+0x10/0x10
[12240.206554]  ret_from_fork+0x31/0x50
[12240.206556]  ? __pfx_kthread+0x10/0x10
[12240.206559]  ret_from_fork_asm+0x1a/0x30
[12240.206563]  </TASK>
[12240.206565] task:kworker/u132:27 state:D stack:0     pid:2154  tgid:2154  ppid:2      flags:0x00004000
[12240.206567] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206570] Call Trace:
[12240.206570]  <TASK>
[12240.206572]  __schedule+0x425/0x1460
[12240.206575]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206580]  schedule+0x27/0xf0
[12240.206582]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206588]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206590]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206595]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206600]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206603]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206605]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206608]  ? bio_split_rw+0x141/0x2a0
[12240.206613]  md_handle_request+0x153/0x270 [md_mod]
[12240.206619]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206622]  __submit_bio+0x190/0x240
[12240.206626]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206629]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206631]  ? submit_bio_noacct+0x47/0x5b0
[12240.206635]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206638]  process_one_work+0x18f/0x3b0
[12240.206640]  worker_thread+0x21f/0x330
[12240.206643]  ? __pfx_worker_thread+0x10/0x10
[12240.206645]  kthread+0xcd/0x100
[12240.206647]  ? __pfx_kthread+0x10/0x10
[12240.206650]  ret_from_fork+0x31/0x50
[12240.206652]  ? __pfx_kthread+0x10/0x10
[12240.206655]  ret_from_fork_asm+0x1a/0x30
[12240.206659]  </TASK>
[12240.206661] task:kworker/u132:28 state:D stack:0     pid:2155  tgid:2155  ppid:2      flags:0x00004000
[12240.206663] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206666] Call Trace:
[12240.206667]  <TASK>
[12240.206668]  __schedule+0x425/0x1460
[12240.206673]  schedule+0x27/0xf0
[12240.206676]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206681]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206684]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206689]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206694]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206697]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206699]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206702]  ? bio_split_rw+0x141/0x2a0
[12240.206707]  md_handle_request+0x153/0x270 [md_mod]
[12240.206713]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206716]  __submit_bio+0x190/0x240
[12240.206720]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206722]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206725]  ? submit_bio_noacct+0x47/0x5b0
[12240.206728]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206731]  process_one_work+0x18f/0x3b0
[12240.206734]  worker_thread+0x21f/0x330
[12240.206736]  ? __pfx_worker_thread+0x10/0x10
[12240.206738]  kthread+0xcd/0x100
[12240.206741]  ? __pfx_kthread+0x10/0x10
[12240.206744]  ret_from_fork+0x31/0x50
[12240.206746]  ? __pfx_kthread+0x10/0x10
[12240.206748]  ret_from_fork_asm+0x1a/0x30
[12240.206753]  </TASK>
[12240.206754] task:kworker/u132:30 state:D stack:0     pid:2157  tgid:2157  ppid:2      flags:0x00004000
[12240.206756] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206759] Call Trace:
[12240.206760]  <TASK>
[12240.206762]  __schedule+0x425/0x1460
[12240.206767]  schedule+0x27/0xf0
[12240.206769]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206775]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206778]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206783]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206793]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206796]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206798]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206801]  ? bio_split_rw+0x141/0x2a0
[12240.206806]  md_handle_request+0x153/0x270 [md_mod]
[12240.206813]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206816]  __submit_bio+0x190/0x240
[12240.206821]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206824]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206826]  ? submit_bio_noacct+0x47/0x5b0
[12240.206829]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206833]  process_one_work+0x18f/0x3b0
[12240.206836]  worker_thread+0x21f/0x330
[12240.206838]  ? __pfx_worker_thread+0x10/0x10
[12240.206840]  kthread+0xcd/0x100
[12240.206843]  ? __pfx_kthread+0x10/0x10
[12240.206845]  ret_from_fork+0x31/0x50
[12240.206848]  ? __pfx_kthread+0x10/0x10
[12240.206850]  ret_from_fork_asm+0x1a/0x30
[12240.206854]  </TASK>
[12240.206898] task:kworker/u132:34 state:D stack:0     pid:3394  tgid:3394  ppid:2      flags:0x00004000
[12240.206901] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.206904] Call Trace:
[12240.206905]  <TASK>
[12240.206907]  __schedule+0x425/0x1460
[12240.206910]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206912]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
[12240.206918]  schedule+0x27/0xf0
[12240.206921]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.206926]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.206929]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.206934]  raid5_make_request+0x364/0x1290 [raid456]
[12240.206939]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206942]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.206945]  ? __pfx_woken_wake_function+0x10/0x10
[12240.206947]  ? bio_split_rw+0x141/0x2a0
[12240.206952]  md_handle_request+0x153/0x270 [md_mod]
[12240.206958]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206961]  __submit_bio+0x190/0x240
[12240.206965]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.206968]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.206971]  ? submit_bio_noacct+0x47/0x5b0
[12240.206974]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.206977]  process_one_work+0x18f/0x3b0
[12240.206979]  worker_thread+0x21f/0x330
[12240.206982]  ? __pfx_worker_thread+0x10/0x10
[12240.206984]  ? __pfx_worker_thread+0x10/0x10
[12240.206986]  kthread+0xcd/0x100
[12240.206988]  ? __pfx_kthread+0x10/0x10
[12240.206991]  ret_from_fork+0x31/0x50
[12240.206994]  ? __pfx_kthread+0x10/0x10
[12240.206996]  ret_from_fork_asm+0x1a/0x30
[12240.207000]  </TASK>
[12240.207002] task:kworker/u132:33 state:D stack:0     pid:5300  tgid:5300  ppid:2      flags:0x00004000
[12240.207005] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.207008] Call Trace:
[12240.207009]  <TASK>
[12240.207010]  __schedule+0x425/0x1460
[12240.207015]  schedule+0x27/0xf0
[12240.207018]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.207024]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.207027]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.207032]  raid5_make_request+0x364/0x1290 [raid456]
[12240.207037]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207040]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.207042]  ? __pfx_woken_wake_function+0x10/0x10
[12240.207045]  ? bio_split_rw+0x141/0x2a0
[12240.207050]  md_handle_request+0x153/0x270 [md_mod]
[12240.207056]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207059]  __submit_bio+0x190/0x240
[12240.207063]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.207066]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207068]  ? submit_bio_noacct+0x47/0x5b0
[12240.207071]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.207074]  process_one_work+0x18f/0x3b0
[12240.207077]  worker_thread+0x21f/0x330
[12240.207080]  ? __pfx_worker_thread+0x10/0x10
[12240.207082]  kthread+0xcd/0x100
[12240.207084]  ? __pfx_kthread+0x10/0x10
[12240.207087]  ret_from_fork+0x31/0x50
[12240.207089]  ? __pfx_kthread+0x10/0x10
[12240.207092]  ret_from_fork_asm+0x1a/0x30
[12240.207096]  </TASK>
[12240.207098] task:kworker/u132:35 state:D stack:0     pid:6467  tgid:6467  ppid:2      flags:0x00004000
[12240.207100] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.207103] Call Trace:
[12240.207104]  <TASK>
[12240.207105]  __schedule+0x425/0x1460
[12240.207111]  schedule+0x27/0xf0
[12240.207113]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.207119]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.207122]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.207126]  raid5_make_request+0x364/0x1290 [raid456]
[12240.207131]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207134]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.207137]  ? __pfx_woken_wake_function+0x10/0x10
[12240.207139]  ? bio_split_rw+0x141/0x2a0
[12240.207144]  md_handle_request+0x153/0x270 [md_mod]
[12240.207150]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207153]  __submit_bio+0x190/0x240
[12240.207157]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.207160]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207163]  ? submit_bio_noacct+0x47/0x5b0
[12240.207166]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.207169]  process_one_work+0x18f/0x3b0
[12240.207172]  worker_thread+0x21f/0x330
[12240.207174]  ? __pfx_worker_thread+0x10/0x10
[12240.207176]  ? __pfx_worker_thread+0x10/0x10
[12240.207178]  kthread+0xcd/0x100
[12240.207180]  ? __pfx_kthread+0x10/0x10
[12240.207184]  ret_from_fork+0x31/0x50
[12240.207186]  ? __pfx_kthread+0x10/0x10
[12240.207188]  ret_from_fork_asm+0x1a/0x30
[12240.207193]  </TASK>
[12240.207194] task:kworker/u132:13 state:D stack:0     pid:9685  tgid:9685  ppid:2      flags:0x00004000
[12240.207197] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.207199] Call Trace:
[12240.207201]  <TASK>
[12240.207202]  __schedule+0x425/0x1460
[12240.207207]  schedule+0x27/0xf0
[12240.207210]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.207215]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.207218]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.207223]  raid5_make_request+0x364/0x1290 [raid456]
[12240.207228]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207231]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.207233]  ? __pfx_woken_wake_function+0x10/0x10
[12240.207236]  ? bio_split_rw+0x141/0x2a0
[12240.207241]  md_handle_request+0x153/0x270 [md_mod]
[12240.207247]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207250]  __submit_bio+0x190/0x240
[12240.207254]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.207257]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207260]  ? submit_bio_noacct+0x47/0x5b0
[12240.207263]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.207266]  process_one_work+0x18f/0x3b0
[12240.207269]  worker_thread+0x21f/0x330
[12240.207270]  ? __pfx_worker_thread+0x10/0x10
[12240.207273]  ? __pfx_worker_thread+0x10/0x10
[12240.207275]  kthread+0xcd/0x100
[12240.207277]  ? __pfx_kthread+0x10/0x10
[12240.207280]  ret_from_fork+0x31/0x50
[12240.207282]  ? __pfx_kthread+0x10/0x10
[12240.207285]  ret_from_fork_asm+0x1a/0x30
[12240.207289]  </TASK>
[12240.207291] task:kworker/0:0     state:D stack:0     pid:9924  tgid:9924  ppid:2      flags:0x00004000
[12240.207294] Workqueue: xfs-sync/dm-4 xfs_log_worker [xfs]
[12240.207366] Call Trace:
[12240.207368]  <TASK>
[12240.207370]  __schedule+0x425/0x1460
[12240.207373]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207377]  schedule+0x27/0xf0
[12240.207380]  schedule_timeout+0x15d/0x170
[12240.207384]  __wait_for_common+0x90/0x1c0
[12240.207386]  ? __pfx_schedule_timeout+0x10/0x10
[12240.207390]  __flush_workqueue+0x158/0x440
[12240.207392]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207397]  xlog_cil_push_now.isra.0+0x5e/0xa0 [xfs]
[12240.207449]  xlog_cil_force_seq+0x69/0x240 [xfs]
[12240.207499]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207502]  ? __schedule+0x42d/0x1460
[12240.207506]  xfs_log_force+0x7a/0x230 [xfs]
[12240.207556]  xfs_log_worker+0x3d/0xc0 [xfs]
[12240.207605]  process_one_work+0x18f/0x3b0
[12240.207608]  worker_thread+0x21f/0x330
[12240.207610]  ? __pfx_worker_thread+0x10/0x10
[12240.207612]  ? __pfx_worker_thread+0x10/0x10
[12240.207615]  kthread+0xcd/0x100
[12240.207617]  ? __pfx_kthread+0x10/0x10
[12240.207620]  ret_from_fork+0x31/0x50
[12240.207623]  ? __pfx_kthread+0x10/0x10
[12240.207625]  ret_from_fork_asm+0x1a/0x30
[12240.207630]  </TASK>
[12240.207632] task:kworker/u132:23 state:D stack:0     pid:11661 tgid:11661 ppid:2      flags:0x00004000
[12240.207635] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.207638] Call Trace:
[12240.207639]  <TASK>
[12240.207641]  __schedule+0x425/0x1460
[12240.207646]  schedule+0x27/0xf0
[12240.207649]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.207656]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.207659]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.207664]  raid5_make_request+0x364/0x1290 [raid456]
[12240.207669]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207672]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.207675]  ? __pfx_woken_wake_function+0x10/0x10
[12240.207677]  ? bio_split_rw+0x141/0x2a0
[12240.207682]  md_handle_request+0x153/0x270 [md_mod]
[12240.207689]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207692]  __submit_bio+0x190/0x240
[12240.207696]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.207700]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207702]  ? submit_bio_noacct+0x47/0x5b0
[12240.207705]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.207708]  process_one_work+0x18f/0x3b0
[12240.207711]  worker_thread+0x21f/0x330
[12240.207713]  ? __pfx_worker_thread+0x10/0x10
[12240.207716]  ? __pfx_worker_thread+0x10/0x10
[12240.207718]  kthread+0xcd/0x100
[12240.207720]  ? __pfx_kthread+0x10/0x10
[12240.207723]  ret_from_fork+0x31/0x50
[12240.207725]  ? __pfx_kthread+0x10/0x10
[12240.207728]  ret_from_fork_asm+0x1a/0x30
[12240.207732]  </TASK>
[12240.207735] task:kworker/u132:10 state:D stack:0     pid:14188 tgid:14188 ppid:2      flags:0x00004000
[12240.207738] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.207741] Call Trace:
[12240.207742]  <TASK>
[12240.207744]  __schedule+0x425/0x1460
[12240.207749]  schedule+0x27/0xf0
[12240.207751]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.207757]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.207760]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.207765]  raid5_make_request+0x364/0x1290 [raid456]
[12240.207770]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207772]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.207775]  ? __pfx_woken_wake_function+0x10/0x10
[12240.207778]  ? bio_split_rw+0x141/0x2a0
[12240.207783]  md_handle_request+0x153/0x270 [md_mod]
[12240.207794]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207797]  __submit_bio+0x190/0x240
[12240.207801]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.207804]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207806]  ? submit_bio_noacct+0x47/0x5b0
[12240.207809]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.207813]  process_one_work+0x18f/0x3b0
[12240.207816]  worker_thread+0x21f/0x330
[12240.207817]  ? __pfx_worker_thread+0x10/0x10
[12240.207820]  ? __pfx_worker_thread+0x10/0x10
[12240.207822]  kthread+0xcd/0x100
[12240.207824]  ? __pfx_kthread+0x10/0x10
[12240.207827]  ret_from_fork+0x31/0x50
[12240.207829]  ? __pfx_kthread+0x10/0x10
[12240.207832]  ret_from_fork_asm+0x1a/0x30
[12240.207836]  </TASK>
[12240.207838] task:kworker/u132:21 state:D stack:0     pid:15639 tgid:15639 ppid:2      flags:0x00004000
[12240.207841] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.207843] Call Trace:
[12240.207844]  <TASK>
[12240.207846]  __schedule+0x425/0x1460
[12240.207848]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207853]  schedule+0x27/0xf0
[12240.207856]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.207862]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.207864]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.207869]  raid5_make_request+0x364/0x1290 [raid456]
[12240.207874]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207877]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.207879]  ? __pfx_woken_wake_function+0x10/0x10
[12240.207882]  ? bio_split_rw+0x141/0x2a0
[12240.207887]  md_handle_request+0x153/0x270 [md_mod]
[12240.207892]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207895]  __submit_bio+0x190/0x240
[12240.207899]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.207903]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.207905]  ? submit_bio_noacct+0x47/0x5b0
[12240.207908]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.207911]  process_one_work+0x18f/0x3b0
[12240.207914]  worker_thread+0x21f/0x330
[12240.207916]  ? __pfx_worker_thread+0x10/0x10
[12240.207918]  ? __pfx_worker_thread+0x10/0x10
[12240.207920]  kthread+0xcd/0x100
[12240.207922]  ? __pfx_kthread+0x10/0x10
[12240.207925]  ret_from_fork+0x31/0x50
[12240.207927]  ? __pfx_kthread+0x10/0x10
[12240.207930]  ret_from_fork_asm+0x1a/0x30
[12240.207934]  </TASK>
[12240.207942] task:rsync           state:D stack:0     pid:21424 tgid:21424 ppid:21415  flags:0x00000000
[12240.207945] Call Trace:
[12240.207946]  <TASK>
[12240.207948]  __schedule+0x425/0x1460
[12240.207950]  ? blk_mq_flush_plug_list.part.0+0x4a7/0x5a0
[12240.207956]  schedule+0x27/0xf0
[12240.207958]  schedule_timeout+0x15d/0x170
[12240.207962]  __wait_for_common+0x90/0x1c0
[12240.207964]  ? __pfx_schedule_timeout+0x10/0x10
[12240.207968]  xfs_buf_iowait+0x1c/0xc0 [xfs]
[12240.208035]  __xfs_buf_submit+0x132/0x1e0 [xfs]
[12240.208086]  xfs_buf_read_map+0x129/0x2a0 [xfs]
[12240.208136]  ? xfs_da_read_buf+0x106/0x180 [xfs]
[12240.208207]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
[12240.208275]  ? xfs_da_read_buf+0x106/0x180 [xfs]
[12240.208335]  xfs_da_read_buf+0x106/0x180 [xfs]
[12240.208387]  __xfs_dir3_free_read+0x34/0x1a0 [xfs]
[12240.208449]  xfs_dir2_node_addname+0x4ba/0xa30 [xfs]
[12240.208500]  ? xfs_bmap_last_offset+0x98/0x140 [xfs]
[12240.208565]  xfs_dir_createname+0x129/0x160 [xfs]
[12240.208621]  ? xfs_trans_ichgtime+0x2f/0x90 [xfs]
[12240.208688]  xfs_dir_create_child+0x6a/0x150 [xfs]
[12240.208743]  ? xfs_diflags_to_iflags+0x12/0x50 [xfs]
[12240.208815]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.208819]  ? xfs_setup_inode+0x52/0x110 [xfs]
[12240.208869]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.208873]  xfs_create+0x3b3/0x490 [xfs]
[12240.208929]  xfs_generic_create+0x312/0x370 [xfs]
[12240.208981]  path_openat+0xf54/0x1210
[12240.208988]  do_filp_open+0xc4/0x170
[12240.208994]  do_sys_openat2+0xab/0xe0
[12240.208999]  __x64_sys_openat+0x57/0xa0
[12240.209002]  do_syscall_64+0xb7/0x200
[12240.209006]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[12240.209010] RIP: 0033:0x7f7c3a92be2f
[12240.209013] RSP: 002b:00007ffc2513e470 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[12240.209015] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f7c3a92be2f
[12240.209017] RDX: 00000000000000c2 RSI: 00007ffc25140740 RDI: 00000000ffffff9c
[12240.209018] RBP: 000000000003a2f8 R08: 002123761d6309f6 R09: 00007ffc2513e6ac
[12240.209019] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffc25140789
[12240.209021] R13: 00007ffc25140740 R14: 8421084210842109 R15: 00007f7c3a9c6a80
[12240.209025]  </TASK>
[12240.209027] task:kworker/u132:6  state:D stack:0     pid:23378 tgid:23378 ppid:2      flags:0x00004000
[12240.209030] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.209034] Call Trace:
[12240.209035]  <TASK>
[12240.209036]  __schedule+0x425/0x1460
[12240.209040]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.209042]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
[12240.209049]  schedule+0x27/0xf0
[12240.209052]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.209058]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.209062]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.209067]  raid5_make_request+0x364/0x1290 [raid456]
[12240.209072]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.209075]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.209077]  ? __pfx_woken_wake_function+0x10/0x10
[12240.209080]  ? bio_split_rw+0x141/0x2a0
[12240.209085]  md_handle_request+0x153/0x270 [md_mod]
[12240.209092]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.209095]  __submit_bio+0x190/0x240
[12240.209099]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.209102]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.209104]  ? submit_bio_noacct+0x47/0x5b0
[12240.209108]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.209111]  process_one_work+0x18f/0x3b0
[12240.209114]  worker_thread+0x21f/0x330
[12240.209115]  ? __pfx_worker_thread+0x10/0x10
[12240.209118]  ? __pfx_worker_thread+0x10/0x10
[12240.209120]  kthread+0xcd/0x100
[12240.209123]  ? __pfx_kthread+0x10/0x10
[12240.209125]  ret_from_fork+0x31/0x50
[12240.209128]  ? __pfx_kthread+0x10/0x10
[12240.209130]  ret_from_fork_asm+0x1a/0x30
[12240.209135]  </TASK>
[12240.209137] task:kworker/u132:29 state:D stack:0     pid:24814 tgid:24814 ppid:2      flags:0x00004000
[12240.209140] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[12240.209143] Call Trace:
[12240.209144]  <TASK>
[12240.209146]  __schedule+0x425/0x1460
[12240.209151]  schedule+0x27/0xf0
[12240.209154]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[12240.209159]  ? __pfx_autoremove_wake_function+0x10/0x10
[12240.209163]  __add_stripe_bio+0x1f4/0x240 [raid456]
[12240.209167]  raid5_make_request+0x364/0x1290 [raid456]
[12240.209172]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.209175]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[12240.209178]  ? __pfx_woken_wake_function+0x10/0x10
[12240.209180]  ? bio_split_rw+0x141/0x2a0
[12240.209185]  md_handle_request+0x153/0x270 [md_mod]
[12240.209191]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.209194]  __submit_bio+0x190/0x240
[12240.209198]  submit_bio_noacct_nocheck+0x19a/0x3c0
[12240.209201]  ? srso_alias_return_thunk+0x5/0xfbef5
[12240.209204]  ? submit_bio_noacct+0x47/0x5b0
[12240.209207]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[12240.209210]  process_one_work+0x18f/0x3b0
[12240.209213]  worker_thread+0x21f/0x330
[12240.209215]  ? __pfx_worker_thread+0x10/0x10
[12240.209217]  ? __pfx_worker_thread+0x10/0x10
[12240.209219]  kthread+0xcd/0x100
[12240.209222]  ? __pfx_kthread+0x10/0x10
[12240.209224]  ret_from_fork+0x31/0x50
[12240.209227]  ? __pfx_kthread+0x10/0x10
[12240.209229]  ret_from_fork_asm+0x1a/0x30
[12240.209233]  </TASK>


> On 4. Nov 2024, at 15:45, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi
> 
>> On 4. Nov 2024, at 13:18, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>> 
>> 
>> I think I found a problem by code review, can you test the following
>> patch? (Noted this is still from latest mainline).
> 
> Thanks - I can try that. I think I got the gist of it and adapted it to 6.11.6:
> https://github.com/flyingcircusio/linux-stable/pull/2/files
> 
> (I’m not properly tooled to apply embedded patches from the mailing lists; I need to improve my toolchain if I have to dive this deep more often in the future.)
> 
> Kind regards,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-04 20:04                                                                                 ` Christian Theune
@ 2024-11-05  1:20                                                                                   ` Yu Kuai
  2024-11-05  6:23                                                                                     ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-11-05  1:20 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/11/05 4:04, Christian Theune wrote:
> Hi,
> 
> unfortunately no joy. After 5000 seconds it got stuck again. Here’s the full list of tracebacks once more.

Thanks for testing, and sorry: I just realized that my patch doesn't fix
the problem I noticed. :(

Please test again with the following patch applied on top. I'm confident
that this, at least, is a real problem.

Thanks again for your testing!
Kuai

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 04f32173839a..7ec6a5d2d166 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4042,7 +4042,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
                              test_bit(R5_SkipCopy, &dev->flags))) {
                                 /* We can return any write requests */
                                 struct bio *wbi, *wbi2;
-                               bool written = false;
+                               bool written;

                                 pr_debug("Return write for disc %d\n", i);
                                 if (test_and_clear_bit(R5_Discard, &dev->flags))
@@ -4053,6 +4053,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
                                 do_endio = true;

 returnbi:
+                               written = false;
                                 dev->page = dev->orig_page;
                                 wbi = dev->written;
                                 dev->written = NULL;

> 
> Happy to add more debugging code if you hand me a patch.
> 
> [12240.198494] sysrq: Show Blocked State
> [12240.202734] task:kworker/u132:0  state:D stack:0     pid:214   tgid:214   ppid:2      flags:0x00004000
> [12240.202740] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.202750] Call Trace:
> [12240.202752]  <TASK>
> [12240.202756]  __schedule+0x425/0x1460
> [12240.202763]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.202769]  schedule+0x27/0xf0
> [12240.202773]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.202782]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.202800]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.202807]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.202814]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.202816]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.202821]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.202825]  ? bio_split_rw+0x141/0x2a0
> [12240.202830]  md_handle_request+0x153/0x270 [md_mod]
> [12240.202838]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.202841]  __submit_bio+0x190/0x240
> [12240.202845]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.202848]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.202851]  ? submit_bio_noacct+0x47/0x5b0
> [12240.202854]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.202858]  process_one_work+0x18f/0x3b0
> [12240.202863]  worker_thread+0x21f/0x330
> [12240.202865]  ? __pfx_worker_thread+0x10/0x10
> [12240.202868]  kthread+0xcd/0x100
> [12240.202871]  ? __pfx_kthread+0x10/0x10
> [12240.202874]  ret_from_fork+0x31/0x50
> [12240.202878]  ? __pfx_kthread+0x10/0x10
> [12240.202881]  ret_from_fork_asm+0x1a/0x30
> [12240.202887]  </TASK>
> [12240.202939] task:kworker/u129:3  state:D stack:0     pid:436   tgid:436   ppid:2      flags:0x00004000
> [12240.202943] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
> [12240.203028] Call Trace:
> [12240.203029]  <TASK>
> [12240.203031]  __schedule+0x425/0x1460
> [12240.203034]  ? __blk_flush_plug+0xf5/0x150
> [12240.203040]  schedule+0x27/0xf0
> [12240.203042]  xlog_state_get_iclog_space+0x102/0x2b0 [xfs]
> [12240.203097]  ? __pfx_default_wake_function+0x10/0x10
> [12240.203101]  xlog_write_get_more_iclog_space+0xd0/0x100 [xfs]
> [12240.203152]  xlog_write+0x310/0x470 [xfs]
> [12240.203203]  xlog_cil_push_work+0x6a5/0x880 [xfs]
> [12240.203256]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203260]  process_one_work+0x18f/0x3b0
> [12240.203263]  worker_thread+0x21f/0x330
> [12240.203266]  ? __pfx_worker_thread+0x10/0x10
> [12240.203268]  kthread+0xcd/0x100
> [12240.203270]  ? __pfx_kthread+0x10/0x10
> [12240.203273]  ret_from_fork+0x31/0x50
> [12240.203276]  ? __pfx_kthread+0x10/0x10
> [12240.203278]  ret_from_fork_asm+0x1a/0x30
> [12240.203283]  </TASK>
> [12240.203289] task:kworker/u132:1  state:D stack:0     pid:490   tgid:490   ppid:2      flags:0x00004000
> [12240.203293] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.203296] Call Trace:
> [12240.203298]  <TASK>
> [12240.203299]  __schedule+0x425/0x1460
> [12240.203302]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203305]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203307]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
> [12240.203313]  schedule+0x27/0xf0
> [12240.203316]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.203323]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.203326]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.203331]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.203336]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203338]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.203342]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.203345]  ? bio_split_rw+0x141/0x2a0
> [12240.203349]  md_handle_request+0x153/0x270 [md_mod]
> [12240.203356]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203359]  __submit_bio+0x190/0x240
> [12240.203363]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.203366]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203368]  ? submit_bio_noacct+0x47/0x5b0
> [12240.203372]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.203375]  process_one_work+0x18f/0x3b0
> [12240.203377]  worker_thread+0x21f/0x330
> [12240.203380]  ? __pfx_worker_thread+0x10/0x10
> [12240.203382]  kthread+0xcd/0x100
> [12240.203385]  ? __pfx_kthread+0x10/0x10
> [12240.203387]  ret_from_fork+0x31/0x50
> [12240.203390]  ? __pfx_kthread+0x10/0x10
> [12240.203392]  ret_from_fork_asm+0x1a/0x30
> [12240.203396]  </TASK>
> [12240.203405] task:kworker/u132:2  state:D stack:0     pid:504   tgid:504   ppid:2      flags:0x00004000
> [12240.203409] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.203411] Call Trace:
> [12240.203412]  <TASK>
> [12240.203414]  __schedule+0x425/0x1460
> [12240.203419]  schedule+0x27/0xf0
> [12240.203422]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.203427]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.203430]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.203435]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.203440]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203443]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.203446]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.203448]  ? bio_split_rw+0x141/0x2a0
> [12240.203453]  md_handle_request+0x153/0x270 [md_mod]
> [12240.203459]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203462]  __submit_bio+0x190/0x240
> [12240.203466]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.203469]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203472]  ? submit_bio_noacct+0x47/0x5b0
> [12240.203475]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.203478]  process_one_work+0x18f/0x3b0
> [12240.203481]  worker_thread+0x21f/0x330
> [12240.203483]  ? __pfx_worker_thread+0x10/0x10
> [12240.203485]  ? __pfx_worker_thread+0x10/0x10
> [12240.203487]  kthread+0xcd/0x100
> [12240.203489]  ? __pfx_kthread+0x10/0x10
> [12240.203492]  ret_from_fork+0x31/0x50
> [12240.203495]  ? __pfx_kthread+0x10/0x10
> [12240.203497]  ret_from_fork_asm+0x1a/0x30
> [12240.203501]  </TASK>
> [12240.203557] task:kworker/u132:3  state:D stack:0     pid:1731  tgid:1731  ppid:2      flags:0x00004000
> [12240.203561] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.203563] Call Trace:
> [12240.203565]  <TASK>
> [12240.203567]  __schedule+0x425/0x1460
> [12240.203572]  schedule+0x27/0xf0
> [12240.203575]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.203580]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.203583]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.203588]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.203593]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203595]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.203598]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.203600]  ? bio_split_rw+0x141/0x2a0
> [12240.203605]  md_handle_request+0x153/0x270 [md_mod]
> [12240.203611]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203614]  __submit_bio+0x190/0x240
> [12240.203618]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.203621]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203624]  ? submit_bio_noacct+0x47/0x5b0
> [12240.203627]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.203630]  process_one_work+0x18f/0x3b0
> [12240.203633]  worker_thread+0x21f/0x330
> [12240.203635]  ? __pfx_worker_thread+0x10/0x10
> [12240.203637]  ? __pfx_worker_thread+0x10/0x10
> [12240.203639]  kthread+0xcd/0x100
> [12240.203641]  ? __pfx_kthread+0x10/0x10
> [12240.203644]  ret_from_fork+0x31/0x50
> [12240.203646]  ? __pfx_kthread+0x10/0x10
> [12240.203649]  ret_from_fork_asm+0x1a/0x30
> [12240.203653]  </TASK>
> [12240.203682] task:kworker/u132:4  state:D stack:0     pid:2131  tgid:2131  ppid:2      flags:0x00004000
> [12240.203686] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.203689] Call Trace:
> [12240.203690]  <TASK>
> [12240.203691]  __schedule+0x425/0x1460
> [12240.203697]  schedule+0x27/0xf0
> [12240.203699]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.203705]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.203707]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.203712]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.203718]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203720]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.203723]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.203725]  ? bio_split_rw+0x141/0x2a0
> [12240.203730]  md_handle_request+0x153/0x270 [md_mod]
> [12240.203736]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203739]  __submit_bio+0x190/0x240
> [12240.203743]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.203746]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203749]  ? submit_bio_noacct+0x47/0x5b0
> [12240.203752]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.203754]  process_one_work+0x18f/0x3b0
> [12240.203758]  worker_thread+0x21f/0x330
> [12240.203760]  ? __pfx_worker_thread+0x10/0x10
> [12240.203762]  ? __pfx_worker_thread+0x10/0x10
> [12240.203764]  kthread+0xcd/0x100
> [12240.203766]  ? __pfx_kthread+0x10/0x10
> [12240.203769]  ret_from_fork+0x31/0x50
> [12240.203771]  ? __pfx_kthread+0x10/0x10
> [12240.203774]  ret_from_fork_asm+0x1a/0x30
> [12240.203778]  </TASK>
> [12240.203780] task:kworker/u132:5  state:D stack:0     pid:2132  tgid:2132  ppid:2      flags:0x00004000
> [12240.203784] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.203793] Call Trace:
> [12240.203794]  <TASK>
> [12240.203795]  __schedule+0x425/0x1460
> [12240.203800]  schedule+0x27/0xf0
> [12240.203803]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.203808]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.203812]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.203817]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.203821]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203824]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.203827]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.203829]  ? bio_split_rw+0x141/0x2a0
> [12240.203834]  md_handle_request+0x153/0x270 [md_mod]
> [12240.203840]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203843]  __submit_bio+0x190/0x240
> [12240.203847]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.203850]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203853]  ? submit_bio_noacct+0x47/0x5b0
> [12240.203856]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.203859]  process_one_work+0x18f/0x3b0
> [12240.203862]  worker_thread+0x21f/0x330
> [12240.203864]  ? __pfx_worker_thread+0x10/0x10
> [12240.203866]  kthread+0xcd/0x100
> [12240.203869]  ? __pfx_kthread+0x10/0x10
> [12240.203871]  ret_from_fork+0x31/0x50
> [12240.203874]  ? __pfx_kthread+0x10/0x10
> [12240.203876]  ret_from_fork_asm+0x1a/0x30
> [12240.203880]  </TASK>
> [12240.203882] task:kworker/u132:7  state:D stack:0     pid:2134  tgid:2134  ppid:2      flags:0x00004000
> [12240.203885] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.203888] Call Trace:
> [12240.203889]  <TASK>
> [12240.203891]  __schedule+0x425/0x1460
> [12240.203893]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203898]  schedule+0x27/0xf0
> [12240.203901]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.203906]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.203909]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.203914]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.203919]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203922]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.203925]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.203927]  ? bio_split_rw+0x141/0x2a0
> [12240.203932]  md_handle_request+0x153/0x270 [md_mod]
> [12240.203938]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203941]  __submit_bio+0x190/0x240
> [12240.203945]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.203948]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.203950]  ? submit_bio_noacct+0x47/0x5b0
> [12240.203953]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.203956]  process_one_work+0x18f/0x3b0
> [12240.203959]  worker_thread+0x21f/0x330
> [12240.203961]  ? __pfx_worker_thread+0x10/0x10
> [12240.203964]  kthread+0xcd/0x100
> [12240.203966]  ? __pfx_kthread+0x10/0x10
> [12240.203969]  ret_from_fork+0x31/0x50
> [12240.203971]  ? __pfx_kthread+0x10/0x10
> [12240.203974]  ret_from_fork_asm+0x1a/0x30
> [12240.203978]  </TASK>
> [12240.203981] task:kworker/u132:9  state:D stack:0     pid:2136  tgid:2136  ppid:2      flags:0x00004000
> [12240.203984] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.203987] Call Trace:
> [12240.203988]  <TASK>
> [12240.203989]  __schedule+0x425/0x1460
> [12240.203995]  schedule+0x27/0xf0
> [12240.203997]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.204003]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.204006]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.204011]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.204016]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204018]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.204021]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.204023]  ? bio_split_rw+0x141/0x2a0
> [12240.204028]  md_handle_request+0x153/0x270 [md_mod]
> [12240.204034]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204037]  __submit_bio+0x190/0x240
> [12240.204041]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.204044]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204046]  ? submit_bio_noacct+0x47/0x5b0
> [12240.204050]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.204052]  process_one_work+0x18f/0x3b0
> [12240.204055]  worker_thread+0x21f/0x330
> [12240.204058]  ? __pfx_worker_thread+0x10/0x10
> [12240.204060]  kthread+0xcd/0x100
> [12240.204062]  ? __pfx_kthread+0x10/0x10
> [12240.204065]  ret_from_fork+0x31/0x50
> [12240.204067]  ? __pfx_kthread+0x10/0x10
> [12240.204069]  ret_from_fork_asm+0x1a/0x30
> [12240.204074]  </TASK>
> [12240.204076] task:kworker/u132:11 state:D stack:0     pid:2138  tgid:2138  ppid:2      flags:0x00004000
> [12240.204078] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.204081] Call Trace:
> [12240.204082]  <TASK>
> [12240.204084]  __schedule+0x425/0x1460
> [12240.204089]  schedule+0x27/0xf0
> [12240.204092]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.204097]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.204100]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.204105]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.204110]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204113]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.204116]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.204119]  ? bio_split_rw+0x141/0x2a0
> [12240.204123]  md_handle_request+0x153/0x270 [md_mod]
> [12240.204129]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204132]  __submit_bio+0x190/0x240
> [12240.204136]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.204139]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204142]  ? submit_bio_noacct+0x47/0x5b0
> [12240.204145]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.204148]  process_one_work+0x18f/0x3b0
> [12240.204151]  worker_thread+0x21f/0x330
> [12240.204154]  ? __pfx_worker_thread+0x10/0x10
> [12240.204156]  kthread+0xcd/0x100
> [12240.204159]  ? __pfx_kthread+0x10/0x10
> [12240.204161]  ret_from_fork+0x31/0x50
> [12240.204164]  ? __pfx_kthread+0x10/0x10
> [12240.204166]  ret_from_fork_asm+0x1a/0x30
> [12240.204171]  </TASK>
> [12240.204172] task:kworker/u132:12 state:D stack:0     pid:2139  tgid:2139  ppid:2      flags:0x00004000
> [12240.204175] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.204178] Call Trace:
> [12240.204179]  <TASK>
> [12240.204180]  __schedule+0x425/0x1460
> [12240.204186]  schedule+0x27/0xf0
> [12240.204188]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.204194]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.204197]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.204202]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.204207]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204209]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.204212]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.204215]  ? bio_split_rw+0x141/0x2a0
> [12240.204220]  md_handle_request+0x153/0x270 [md_mod]
> [12240.204226]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204229]  __submit_bio+0x190/0x240
> [12240.204233]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.204235]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204238]  ? submit_bio_noacct+0x47/0x5b0
> [12240.204241]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.204244]  process_one_work+0x18f/0x3b0
> [12240.204247]  worker_thread+0x21f/0x330
> [12240.204249]  ? __pfx_worker_thread+0x10/0x10
> [12240.204251]  kthread+0xcd/0x100
> [12240.204254]  ? __pfx_kthread+0x10/0x10
> [12240.204257]  ret_from_fork+0x31/0x50
> [12240.204259]  ? __pfx_kthread+0x10/0x10
> [12240.204261]  ret_from_fork_asm+0x1a/0x30
> [12240.204266]  </TASK>
> [12240.204267] task:kworker/u132:14 state:D stack:0     pid:2141  tgid:2141  ppid:2      flags:0x00004000
> [12240.204270] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.204273] Call Trace:
> [12240.204274]  <TASK>
> [12240.204276]  __schedule+0x425/0x1460
> [12240.204278]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204283]  schedule+0x27/0xf0
> [12240.204286]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.204291]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.204294]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.204299]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.204304]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204306]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.204309]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.204312]  ? bio_split_rw+0x141/0x2a0
> [12240.204317]  md_handle_request+0x153/0x270 [md_mod]
> [12240.204322]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204326]  __submit_bio+0x190/0x240
> [12240.204330]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.204333]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204335]  ? submit_bio_noacct+0x47/0x5b0
> [12240.204338]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.204341]  process_one_work+0x18f/0x3b0
> [12240.204344]  worker_thread+0x21f/0x330
> [12240.204346]  ? __pfx_worker_thread+0x10/0x10
> [12240.204348]  kthread+0xcd/0x100
> [12240.204351]  ? __pfx_kthread+0x10/0x10
> [12240.204354]  ret_from_fork+0x31/0x50
> [12240.204356]  ? __pfx_kthread+0x10/0x10
> [12240.204358]  ret_from_fork_asm+0x1a/0x30
> [12240.204363]  </TASK>
> [12240.204364] task:kworker/u132:15 state:D stack:0     pid:2142  tgid:2142  ppid:2      flags:0x00004000
> [12240.204367] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.204370] Call Trace:
> [12240.204371]  <TASK>
> [12240.204372]  __schedule+0x425/0x1460
> [12240.204377]  schedule+0x27/0xf0
> [12240.204380]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.204385]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.204388]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.204393]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.204398]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204401]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.204403]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.204406]  ? bio_split_rw+0x141/0x2a0
> [12240.204410]  md_handle_request+0x153/0x270 [md_mod]
> [12240.204416]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204419]  __submit_bio+0x190/0x240
> [12240.204424]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.204427]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204429]  ? submit_bio_noacct+0x47/0x5b0
> [12240.204432]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.204435]  process_one_work+0x18f/0x3b0
> [12240.204438]  worker_thread+0x21f/0x330
> [12240.204441]  ? __pfx_worker_thread+0x10/0x10
> [12240.204443]  kthread+0xcd/0x100
> [12240.204445]  ? __pfx_kthread+0x10/0x10
> [12240.204448]  ret_from_fork+0x31/0x50
> [12240.204450]  ? __pfx_kthread+0x10/0x10
> [12240.204453]  ret_from_fork_asm+0x1a/0x30
> [12240.204457]  </TASK>
> [12240.204458] task:kworker/u132:16 state:D stack:0     pid:2143  tgid:2143  ppid:2      flags:0x00004000
> [12240.204461] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.204464] Call Trace:
> [12240.204465]  <TASK>
> [12240.204466]  __schedule+0x425/0x1460
> [12240.204471]  schedule+0x27/0xf0
> [12240.204474]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.204480]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.204482]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.204487]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.204493]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204495]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.204497]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.204500]  ? bio_split_rw+0x141/0x2a0
> [12240.204505]  md_handle_request+0x153/0x270 [md_mod]
> [12240.204511]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204514]  __submit_bio+0x190/0x240
> [12240.204518]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.204521]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.204524]  ? submit_bio_noacct+0x47/0x5b0
> [12240.204527]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.204530]  process_one_work+0x18f/0x3b0
> [12240.204532]  worker_thread+0x21f/0x330
> [12240.204535]  ? __pfx_worker_thread+0x10/0x10
> [12240.204537]  kthread+0xcd/0x100
> [12240.204540]  ? __pfx_kthread+0x10/0x10
> [12240.204542]  ret_from_fork+0x31/0x50
> [12240.204544]  ? __pfx_kthread+0x10/0x10
> [12240.204547]  ret_from_fork_asm+0x1a/0x30
> [12240.204551]  </TASK>
> [12240.204553] task:kworker/u132:17 state:D stack:0     pid:2144  tgid:2144  ppid:2      flags:0x00004000
> [12240.204556] Workqueue: writeback wb_workfn (flush-254:4)
> [12240.204560] Call Trace:
> [12240.204561]  <TASK>
> [12240.204563]  __schedule+0x425/0x1460
> [12240.204568]  schedule+0x27/0xf0
> [12240.204570]  schedule_timeout+0x15d/0x170
> [12240.204575]  __wait_for_common+0x90/0x1c0
> [12240.204577]  ? __pfx_schedule_timeout+0x10/0x10
> [12240.204581]  xfs_buf_iowait+0x1c/0xc0 [xfs]
> [12240.204655]  __xfs_buf_submit+0x132/0x1e0 [xfs]
> [12240.204708]  xfs_buf_read_map+0x129/0x2a0 [xfs]
> [12240.204761]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [12240.204837]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
> [12240.204910]  ? xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [12240.204972]  xfs_btree_read_buf_block+0xa7/0x120 [xfs]
> [12240.205025]  xfs_btree_lookup_get_block+0xa6/0x1f0 [xfs]
> [12240.205077]  xfs_btree_lookup+0xea/0x500 [xfs]
> [12240.205129]  xfs_alloc_fixup_trees+0x70/0x520 [xfs]
> [12240.205195]  xfs_alloc_cur_finish+0x2b/0xa0 [xfs]
> [12240.205245]  xfs_alloc_ag_vextent_near+0x437/0x540 [xfs]
> [12240.205297]  xfs_alloc_vextent_iterate_ags.constprop.0+0xc8/0x200 [xfs]
> [12240.205347]  ? xfs_buf_item_format+0x1b8/0x450 [xfs]
> [12240.205416]  xfs_alloc_vextent_start_ag+0xc0/0x190 [xfs]
> [12240.205471]  xfs_bmap_btalloc+0x4dd/0x640 [xfs]
> [12240.205535]  xfs_bmapi_allocate+0xac/0x2c0 [xfs]
> [12240.205585]  xfs_bmapi_convert_one_delalloc+0x1f6/0x430 [xfs]
> [12240.205639]  xfs_bmapi_convert_delalloc+0x43/0x60 [xfs]
> [12240.205689]  xfs_map_blocks+0x257/0x420 [xfs]
> [12240.205762]  iomap_writepages+0x271/0x9b0
> [12240.205768]  xfs_vm_writepages+0x67/0x90 [xfs]
> [12240.205826]  do_writepages+0x76/0x260
> [12240.205832]  __writeback_single_inode+0x3d/0x350
> [12240.205836]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.205838]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.205842]  writeback_sb_inodes+0x21c/0x4e0
> [12240.205853]  __writeback_inodes_wb+0x4c/0xf0
> [12240.205856]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.205859]  wb_writeback+0x193/0x310
> [12240.205864]  wb_workfn+0x357/0x450
> [12240.205868]  process_one_work+0x18f/0x3b0
> [12240.205871]  worker_thread+0x21f/0x330
> [12240.205874]  ? __pfx_worker_thread+0x10/0x10
> [12240.205876]  kthread+0xcd/0x100
> [12240.205878]  ? __pfx_kthread+0x10/0x10
> [12240.205881]  ret_from_fork+0x31/0x50
> [12240.205884]  ? __pfx_kthread+0x10/0x10
> [12240.205886]  ret_from_fork_asm+0x1a/0x30
> [12240.205891]  </TASK>
> [12240.205893] task:kworker/u132:18 state:D stack:0     pid:2145  tgid:2145  ppid:2      flags:0x00004000
> [12240.205896] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.205899] Call Trace:
> [12240.205900]  <TASK>
> [12240.205901]  __schedule+0x425/0x1460
> [12240.205907]  schedule+0x27/0xf0
> [12240.205910]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.205916]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.205919]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.205925]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.205930]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.205933]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.205936]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.205939]  ? bio_split_rw+0x141/0x2a0
> [12240.205943]  md_handle_request+0x153/0x270 [md_mod]
> [12240.205950]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.205953]  __submit_bio+0x190/0x240
> [12240.205957]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.205961]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.205963]  ? submit_bio_noacct+0x47/0x5b0
> [12240.205966]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.205970]  process_one_work+0x18f/0x3b0
> [12240.205972]  worker_thread+0x21f/0x330
> [12240.205975]  ? __pfx_worker_thread+0x10/0x10
> [12240.205977]  kthread+0xcd/0x100
> [12240.205980]  ? __pfx_kthread+0x10/0x10
> [12240.205982]  ret_from_fork+0x31/0x50
> [12240.205985]  ? __pfx_kthread+0x10/0x10
> [12240.205987]  ret_from_fork_asm+0x1a/0x30
> [12240.205992]  </TASK>
> [12240.205993] task:kworker/u132:19 state:D stack:0     pid:2146  tgid:2146  ppid:2      flags:0x00004000
> [12240.205996] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.205998] Call Trace:
> [12240.205999]  <TASK>
> [12240.206001]  __schedule+0x425/0x1460
> [12240.206007]  schedule+0x27/0xf0
> [12240.206009]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206015]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206018]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206022]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206028]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206030]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206033]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206035]  ? bio_split_rw+0x141/0x2a0
> [12240.206040]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206046]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206050]  __submit_bio+0x190/0x240
> [12240.206053]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206057]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206059]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206062]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206065]  process_one_work+0x18f/0x3b0
> [12240.206068]  worker_thread+0x21f/0x330
> [12240.206071]  ? __pfx_worker_thread+0x10/0x10
> [12240.206073]  kthread+0xcd/0x100
> [12240.206075]  ? __pfx_kthread+0x10/0x10
> [12240.206078]  ret_from_fork+0x31/0x50
> [12240.206080]  ? __pfx_kthread+0x10/0x10
> [12240.206083]  ret_from_fork_asm+0x1a/0x30
> [12240.206087]  </TASK>
> [12240.206089] task:kworker/u132:20 state:D stack:0     pid:2147  tgid:2147  ppid:2      flags:0x00004000
> [12240.206091] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206094] Call Trace:
> [12240.206095]  <TASK>
> [12240.206097]  __schedule+0x425/0x1460
> [12240.206102]  schedule+0x27/0xf0
> [12240.206105]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206110]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206113]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206118]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206123]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206126]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206128]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206131]  ? bio_split_rw+0x141/0x2a0
> [12240.206136]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206142]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206145]  __submit_bio+0x190/0x240
> [12240.206149]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206152]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206155]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206158]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206161]  process_one_work+0x18f/0x3b0
> [12240.206163]  worker_thread+0x21f/0x330
> [12240.206166]  ? __pfx_worker_thread+0x10/0x10
> [12240.206168]  kthread+0xcd/0x100
> [12240.206171]  ? __pfx_kthread+0x10/0x10
> [12240.206173]  ret_from_fork+0x31/0x50
> [12240.206175]  ? __pfx_kthread+0x10/0x10
> [12240.206178]  ret_from_fork_asm+0x1a/0x30
> [12240.206182]  </TASK>
> [12240.206184] task:kworker/u132:22 state:D stack:0     pid:2149  tgid:2149  ppid:2      flags:0x00004000
> [12240.206187] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206189] Call Trace:
> [12240.206190]  <TASK>
> [12240.206192]  __schedule+0x425/0x1460
> [12240.206197]  schedule+0x27/0xf0
> [12240.206199]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206205]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206208]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206213]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206218]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206220]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206223]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206226]  ? bio_split_rw+0x141/0x2a0
> [12240.206230]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206236]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206240]  __submit_bio+0x190/0x240
> [12240.206243]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206246]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206249]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206252]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206255]  process_one_work+0x18f/0x3b0
> [12240.206258]  worker_thread+0x21f/0x330
> [12240.206260]  ? __pfx_worker_thread+0x10/0x10
> [12240.206262]  kthread+0xcd/0x100
> [12240.206265]  ? __pfx_kthread+0x10/0x10
> [12240.206268]  ret_from_fork+0x31/0x50
> [12240.206270]  ? __pfx_kthread+0x10/0x10
> [12240.206272]  ret_from_fork_asm+0x1a/0x30
> [12240.206277]  </TASK>
> [12240.206279] task:kworker/u132:24 state:D stack:0     pid:2151  tgid:2151  ppid:2      flags:0x00004000
> [12240.206281] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206284] Call Trace:
> [12240.206284]  <TASK>
> [12240.206286]  __schedule+0x425/0x1460
> [12240.206289]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206294]  schedule+0x27/0xf0
> [12240.206296]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206302]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206305]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206310]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206315]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206318]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206320]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206323]  ? bio_split_rw+0x141/0x2a0
> [12240.206327]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206333]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206336]  __submit_bio+0x190/0x240
> [12240.206340]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206344]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206346]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206349]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206352]  process_one_work+0x18f/0x3b0
> [12240.206355]  worker_thread+0x21f/0x330
> [12240.206357]  ? __pfx_worker_thread+0x10/0x10
> [12240.206359]  kthread+0xcd/0x100
> [12240.206362]  ? __pfx_kthread+0x10/0x10
> [12240.206365]  ret_from_fork+0x31/0x50
> [12240.206367]  ? __pfx_kthread+0x10/0x10
> [12240.206370]  ret_from_fork_asm+0x1a/0x30
> [12240.206374]  </TASK>
> [12240.206376] task:kworker/u132:25 state:D stack:0     pid:2152  tgid:2152  ppid:2      flags:0x00004000
> [12240.206379] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206381] Call Trace:
> [12240.206382]  <TASK>
> [12240.206384]  __schedule+0x425/0x1460
> [12240.206389]  schedule+0x27/0xf0
> [12240.206391]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206397]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206400]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206405]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206410]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206412]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206415]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206418]  ? bio_split_rw+0x141/0x2a0
> [12240.206422]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206428]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206432]  __submit_bio+0x190/0x240
> [12240.206436]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206438]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206441]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206444]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206447]  process_one_work+0x18f/0x3b0
> [12240.206450]  worker_thread+0x21f/0x330
> [12240.206452]  ? __pfx_worker_thread+0x10/0x10
> [12240.206454]  kthread+0xcd/0x100
> [12240.206457]  ? __pfx_kthread+0x10/0x10
> [12240.206460]  ret_from_fork+0x31/0x50
> [12240.206462]  ? __pfx_kthread+0x10/0x10
> [12240.206464]  ret_from_fork_asm+0x1a/0x30
> [12240.206469]  </TASK>
> [12240.206470] task:kworker/u132:26 state:D stack:0     pid:2153  tgid:2153  ppid:2      flags:0x00004000
> [12240.206473] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206475] Call Trace:
> [12240.206476]  <TASK>
> [12240.206479]  __schedule+0x425/0x1460
> [12240.206484]  schedule+0x27/0xf0
> [12240.206486]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206492]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206495]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206499]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206504]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206507]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206510]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206512]  ? bio_split_rw+0x141/0x2a0
> [12240.206517]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206523]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206526]  __submit_bio+0x190/0x240
> [12240.206530]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206533]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206536]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206539]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206541]  process_one_work+0x18f/0x3b0
> [12240.206544]  worker_thread+0x21f/0x330
> [12240.206547]  ? __pfx_worker_thread+0x10/0x10
> [12240.206549]  kthread+0xcd/0x100
> [12240.206551]  ? __pfx_kthread+0x10/0x10
> [12240.206554]  ret_from_fork+0x31/0x50
> [12240.206556]  ? __pfx_kthread+0x10/0x10
> [12240.206559]  ret_from_fork_asm+0x1a/0x30
> [12240.206563]  </TASK>
> [12240.206565] task:kworker/u132:27 state:D stack:0     pid:2154  tgid:2154  ppid:2      flags:0x00004000
> [12240.206567] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206570] Call Trace:
> [12240.206570]  <TASK>
> [12240.206572]  __schedule+0x425/0x1460
> [12240.206575]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206580]  schedule+0x27/0xf0
> [12240.206582]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206588]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206590]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206595]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206600]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206603]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206605]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206608]  ? bio_split_rw+0x141/0x2a0
> [12240.206613]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206619]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206622]  __submit_bio+0x190/0x240
> [12240.206626]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206629]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206631]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206635]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206638]  process_one_work+0x18f/0x3b0
> [12240.206640]  worker_thread+0x21f/0x330
> [12240.206643]  ? __pfx_worker_thread+0x10/0x10
> [12240.206645]  kthread+0xcd/0x100
> [12240.206647]  ? __pfx_kthread+0x10/0x10
> [12240.206650]  ret_from_fork+0x31/0x50
> [12240.206652]  ? __pfx_kthread+0x10/0x10
> [12240.206655]  ret_from_fork_asm+0x1a/0x30
> [12240.206659]  </TASK>
> [12240.206661] task:kworker/u132:28 state:D stack:0     pid:2155  tgid:2155  ppid:2      flags:0x00004000
> [12240.206663] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206666] Call Trace:
> [12240.206667]  <TASK>
> [12240.206668]  __schedule+0x425/0x1460
> [12240.206673]  schedule+0x27/0xf0
> [12240.206676]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206681]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206684]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206689]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206694]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206697]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206699]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206702]  ? bio_split_rw+0x141/0x2a0
> [12240.206707]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206713]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206716]  __submit_bio+0x190/0x240
> [12240.206720]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206722]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206725]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206728]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206731]  process_one_work+0x18f/0x3b0
> [12240.206734]  worker_thread+0x21f/0x330
> [12240.206736]  ? __pfx_worker_thread+0x10/0x10
> [12240.206738]  kthread+0xcd/0x100
> [12240.206741]  ? __pfx_kthread+0x10/0x10
> [12240.206744]  ret_from_fork+0x31/0x50
> [12240.206746]  ? __pfx_kthread+0x10/0x10
> [12240.206748]  ret_from_fork_asm+0x1a/0x30
> [12240.206753]  </TASK>
> [12240.206754] task:kworker/u132:30 state:D stack:0     pid:2157  tgid:2157  ppid:2      flags:0x00004000
> [12240.206756] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206759] Call Trace:
> [12240.206760]  <TASK>
> [12240.206762]  __schedule+0x425/0x1460
> [12240.206767]  schedule+0x27/0xf0
> [12240.206769]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206775]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206778]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206783]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206793]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206796]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206798]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206801]  ? bio_split_rw+0x141/0x2a0
> [12240.206806]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206813]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206816]  __submit_bio+0x190/0x240
> [12240.206821]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206824]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206826]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206829]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206833]  process_one_work+0x18f/0x3b0
> [12240.206836]  worker_thread+0x21f/0x330
> [12240.206838]  ? __pfx_worker_thread+0x10/0x10
> [12240.206840]  kthread+0xcd/0x100
> [12240.206843]  ? __pfx_kthread+0x10/0x10
> [12240.206845]  ret_from_fork+0x31/0x50
> [12240.206848]  ? __pfx_kthread+0x10/0x10
> [12240.206850]  ret_from_fork_asm+0x1a/0x30
> [12240.206854]  </TASK>
> [12240.206898] task:kworker/u132:34 state:D stack:0     pid:3394  tgid:3394  ppid:2      flags:0x00004000
> [12240.206901] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.206904] Call Trace:
> [12240.206905]  <TASK>
> [12240.206907]  __schedule+0x425/0x1460
> [12240.206910]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206912]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
> [12240.206918]  schedule+0x27/0xf0
> [12240.206921]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.206926]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.206929]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.206934]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.206939]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206942]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.206945]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.206947]  ? bio_split_rw+0x141/0x2a0
> [12240.206952]  md_handle_request+0x153/0x270 [md_mod]
> [12240.206958]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206961]  __submit_bio+0x190/0x240
> [12240.206965]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.206968]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.206971]  ? submit_bio_noacct+0x47/0x5b0
> [12240.206974]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.206977]  process_one_work+0x18f/0x3b0
> [12240.206979]  worker_thread+0x21f/0x330
> [12240.206982]  ? __pfx_worker_thread+0x10/0x10
> [12240.206984]  ? __pfx_worker_thread+0x10/0x10
> [12240.206986]  kthread+0xcd/0x100
> [12240.206988]  ? __pfx_kthread+0x10/0x10
> [12240.206991]  ret_from_fork+0x31/0x50
> [12240.206994]  ? __pfx_kthread+0x10/0x10
> [12240.206996]  ret_from_fork_asm+0x1a/0x30
> [12240.207000]  </TASK>
> [12240.207002] task:kworker/u132:33 state:D stack:0     pid:5300  tgid:5300  ppid:2      flags:0x00004000
> [12240.207005] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.207008] Call Trace:
> [12240.207009]  <TASK>
> [12240.207010]  __schedule+0x425/0x1460
> [12240.207015]  schedule+0x27/0xf0
> [12240.207018]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.207024]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.207027]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.207032]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.207037]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207040]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.207042]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.207045]  ? bio_split_rw+0x141/0x2a0
> [12240.207050]  md_handle_request+0x153/0x270 [md_mod]
> [12240.207056]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207059]  __submit_bio+0x190/0x240
> [12240.207063]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.207066]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207068]  ? submit_bio_noacct+0x47/0x5b0
> [12240.207071]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.207074]  process_one_work+0x18f/0x3b0
> [12240.207077]  worker_thread+0x21f/0x330
> [12240.207080]  ? __pfx_worker_thread+0x10/0x10
> [12240.207082]  kthread+0xcd/0x100
> [12240.207084]  ? __pfx_kthread+0x10/0x10
> [12240.207087]  ret_from_fork+0x31/0x50
> [12240.207089]  ? __pfx_kthread+0x10/0x10
> [12240.207092]  ret_from_fork_asm+0x1a/0x30
> [12240.207096]  </TASK>
> [12240.207098] task:kworker/u132:35 state:D stack:0     pid:6467  tgid:6467  ppid:2      flags:0x00004000
> [12240.207100] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.207103] Call Trace:
> [12240.207104]  <TASK>
> [12240.207105]  __schedule+0x425/0x1460
> [12240.207111]  schedule+0x27/0xf0
> [12240.207113]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.207119]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.207122]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.207126]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.207131]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207134]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.207137]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.207139]  ? bio_split_rw+0x141/0x2a0
> [12240.207144]  md_handle_request+0x153/0x270 [md_mod]
> [12240.207150]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207153]  __submit_bio+0x190/0x240
> [12240.207157]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.207160]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207163]  ? submit_bio_noacct+0x47/0x5b0
> [12240.207166]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.207169]  process_one_work+0x18f/0x3b0
> [12240.207172]  worker_thread+0x21f/0x330
> [12240.207174]  ? __pfx_worker_thread+0x10/0x10
> [12240.207176]  ? __pfx_worker_thread+0x10/0x10
> [12240.207178]  kthread+0xcd/0x100
> [12240.207180]  ? __pfx_kthread+0x10/0x10
> [12240.207184]  ret_from_fork+0x31/0x50
> [12240.207186]  ? __pfx_kthread+0x10/0x10
> [12240.207188]  ret_from_fork_asm+0x1a/0x30
> [12240.207193]  </TASK>
> [12240.207194] task:kworker/u132:13 state:D stack:0     pid:9685  tgid:9685  ppid:2      flags:0x00004000
> [12240.207197] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.207199] Call Trace:
> [12240.207201]  <TASK>
> [12240.207202]  __schedule+0x425/0x1460
> [12240.207207]  schedule+0x27/0xf0
> [12240.207210]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.207215]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.207218]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.207223]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.207228]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207231]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.207233]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.207236]  ? bio_split_rw+0x141/0x2a0
> [12240.207241]  md_handle_request+0x153/0x270 [md_mod]
> [12240.207247]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207250]  __submit_bio+0x190/0x240
> [12240.207254]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.207257]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207260]  ? submit_bio_noacct+0x47/0x5b0
> [12240.207263]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.207266]  process_one_work+0x18f/0x3b0
> [12240.207269]  worker_thread+0x21f/0x330
> [12240.207270]  ? __pfx_worker_thread+0x10/0x10
> [12240.207273]  ? __pfx_worker_thread+0x10/0x10
> [12240.207275]  kthread+0xcd/0x100
> [12240.207277]  ? __pfx_kthread+0x10/0x10
> [12240.207280]  ret_from_fork+0x31/0x50
> [12240.207282]  ? __pfx_kthread+0x10/0x10
> [12240.207285]  ret_from_fork_asm+0x1a/0x30
> [12240.207289]  </TASK>
> [12240.207291] task:kworker/0:0     state:D stack:0     pid:9924  tgid:9924  ppid:2      flags:0x00004000
> [12240.207294] Workqueue: xfs-sync/dm-4 xfs_log_worker [xfs]
> [12240.207366] Call Trace:
> [12240.207368]  <TASK>
> [12240.207370]  __schedule+0x425/0x1460
> [12240.207373]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207377]  schedule+0x27/0xf0
> [12240.207380]  schedule_timeout+0x15d/0x170
> [12240.207384]  __wait_for_common+0x90/0x1c0
> [12240.207386]  ? __pfx_schedule_timeout+0x10/0x10
> [12240.207390]  __flush_workqueue+0x158/0x440
> [12240.207392]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207397]  xlog_cil_push_now.isra.0+0x5e/0xa0 [xfs]
> [12240.207449]  xlog_cil_force_seq+0x69/0x240 [xfs]
> [12240.207499]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207502]  ? __schedule+0x42d/0x1460
> [12240.207506]  xfs_log_force+0x7a/0x230 [xfs]
> [12240.207556]  xfs_log_worker+0x3d/0xc0 [xfs]
> [12240.207605]  process_one_work+0x18f/0x3b0
> [12240.207608]  worker_thread+0x21f/0x330
> [12240.207610]  ? __pfx_worker_thread+0x10/0x10
> [12240.207612]  ? __pfx_worker_thread+0x10/0x10
> [12240.207615]  kthread+0xcd/0x100
> [12240.207617]  ? __pfx_kthread+0x10/0x10
> [12240.207620]  ret_from_fork+0x31/0x50
> [12240.207623]  ? __pfx_kthread+0x10/0x10
> [12240.207625]  ret_from_fork_asm+0x1a/0x30
> [12240.207630]  </TASK>
> [12240.207632] task:kworker/u132:23 state:D stack:0     pid:11661 tgid:11661 ppid:2      flags:0x00004000
> [12240.207635] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.207638] Call Trace:
> [12240.207639]  <TASK>
> [12240.207641]  __schedule+0x425/0x1460
> [12240.207646]  schedule+0x27/0xf0
> [12240.207649]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.207656]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.207659]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.207664]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.207669]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207672]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.207675]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.207677]  ? bio_split_rw+0x141/0x2a0
> [12240.207682]  md_handle_request+0x153/0x270 [md_mod]
> [12240.207689]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207692]  __submit_bio+0x190/0x240
> [12240.207696]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.207700]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207702]  ? submit_bio_noacct+0x47/0x5b0
> [12240.207705]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.207708]  process_one_work+0x18f/0x3b0
> [12240.207711]  worker_thread+0x21f/0x330
> [12240.207713]  ? __pfx_worker_thread+0x10/0x10
> [12240.207716]  ? __pfx_worker_thread+0x10/0x10
> [12240.207718]  kthread+0xcd/0x100
> [12240.207720]  ? __pfx_kthread+0x10/0x10
> [12240.207723]  ret_from_fork+0x31/0x50
> [12240.207725]  ? __pfx_kthread+0x10/0x10
> [12240.207728]  ret_from_fork_asm+0x1a/0x30
> [12240.207732]  </TASK>
> [12240.207735] task:kworker/u132:10 state:D stack:0     pid:14188 tgid:14188 ppid:2      flags:0x00004000
> [12240.207738] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.207741] Call Trace:
> [12240.207742]  <TASK>
> [12240.207744]  __schedule+0x425/0x1460
> [12240.207749]  schedule+0x27/0xf0
> [12240.207751]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.207757]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.207760]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.207765]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.207770]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207772]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.207775]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.207778]  ? bio_split_rw+0x141/0x2a0
> [12240.207783]  md_handle_request+0x153/0x270 [md_mod]
> [12240.207794]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207797]  __submit_bio+0x190/0x240
> [12240.207801]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.207804]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207806]  ? submit_bio_noacct+0x47/0x5b0
> [12240.207809]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.207813]  process_one_work+0x18f/0x3b0
> [12240.207816]  worker_thread+0x21f/0x330
> [12240.207817]  ? __pfx_worker_thread+0x10/0x10
> [12240.207820]  ? __pfx_worker_thread+0x10/0x10
> [12240.207822]  kthread+0xcd/0x100
> [12240.207824]  ? __pfx_kthread+0x10/0x10
> [12240.207827]  ret_from_fork+0x31/0x50
> [12240.207829]  ? __pfx_kthread+0x10/0x10
> [12240.207832]  ret_from_fork_asm+0x1a/0x30
> [12240.207836]  </TASK>
> [12240.207838] task:kworker/u132:21 state:D stack:0     pid:15639 tgid:15639 ppid:2      flags:0x00004000
> [12240.207841] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.207843] Call Trace:
> [12240.207844]  <TASK>
> [12240.207846]  __schedule+0x425/0x1460
> [12240.207848]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207853]  schedule+0x27/0xf0
> [12240.207856]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.207862]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.207864]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.207869]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.207874]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207877]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.207879]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.207882]  ? bio_split_rw+0x141/0x2a0
> [12240.207887]  md_handle_request+0x153/0x270 [md_mod]
> [12240.207892]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207895]  __submit_bio+0x190/0x240
> [12240.207899]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.207903]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.207905]  ? submit_bio_noacct+0x47/0x5b0
> [12240.207908]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.207911]  process_one_work+0x18f/0x3b0
> [12240.207914]  worker_thread+0x21f/0x330
> [12240.207916]  ? __pfx_worker_thread+0x10/0x10
> [12240.207918]  ? __pfx_worker_thread+0x10/0x10
> [12240.207920]  kthread+0xcd/0x100
> [12240.207922]  ? __pfx_kthread+0x10/0x10
> [12240.207925]  ret_from_fork+0x31/0x50
> [12240.207927]  ? __pfx_kthread+0x10/0x10
> [12240.207930]  ret_from_fork_asm+0x1a/0x30
> [12240.207934]  </TASK>
> [12240.207942] task:rsync           state:D stack:0     pid:21424 tgid:21424 ppid:21415  flags:0x00000000
> [12240.207945] Call Trace:
> [12240.207946]  <TASK>
> [12240.207948]  __schedule+0x425/0x1460
> [12240.207950]  ? blk_mq_flush_plug_list.part.0+0x4a7/0x5a0
> [12240.207956]  schedule+0x27/0xf0
> [12240.207958]  schedule_timeout+0x15d/0x170
> [12240.207962]  __wait_for_common+0x90/0x1c0
> [12240.207964]  ? __pfx_schedule_timeout+0x10/0x10
> [12240.207968]  xfs_buf_iowait+0x1c/0xc0 [xfs]
> [12240.208035]  __xfs_buf_submit+0x132/0x1e0 [xfs]
> [12240.208086]  xfs_buf_read_map+0x129/0x2a0 [xfs]
> [12240.208136]  ? xfs_da_read_buf+0x106/0x180 [xfs]
> [12240.208207]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
> [12240.208275]  ? xfs_da_read_buf+0x106/0x180 [xfs]
> [12240.208335]  xfs_da_read_buf+0x106/0x180 [xfs]
> [12240.208387]  __xfs_dir3_free_read+0x34/0x1a0 [xfs]
> [12240.208449]  xfs_dir2_node_addname+0x4ba/0xa30 [xfs]
> [12240.208500]  ? xfs_bmap_last_offset+0x98/0x140 [xfs]
> [12240.208565]  xfs_dir_createname+0x129/0x160 [xfs]
> [12240.208621]  ? xfs_trans_ichgtime+0x2f/0x90 [xfs]
> [12240.208688]  xfs_dir_create_child+0x6a/0x150 [xfs]
> [12240.208743]  ? xfs_diflags_to_iflags+0x12/0x50 [xfs]
> [12240.208815]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.208819]  ? xfs_setup_inode+0x52/0x110 [xfs]
> [12240.208869]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.208873]  xfs_create+0x3b3/0x490 [xfs]
> [12240.208929]  xfs_generic_create+0x312/0x370 [xfs]
> [12240.208981]  path_openat+0xf54/0x1210
> [12240.208988]  do_filp_open+0xc4/0x170
> [12240.208994]  do_sys_openat2+0xab/0xe0
> [12240.208999]  __x64_sys_openat+0x57/0xa0
> [12240.209002]  do_syscall_64+0xb7/0x200
> [12240.209006]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> [12240.209010] RIP: 0033:0x7f7c3a92be2f
> [12240.209013] RSP: 002b:00007ffc2513e470 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> [12240.209015] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f7c3a92be2f
> [12240.209017] RDX: 00000000000000c2 RSI: 00007ffc25140740 RDI: 00000000ffffff9c
> [12240.209018] RBP: 000000000003a2f8 R08: 002123761d6309f6 R09: 00007ffc2513e6ac
> [12240.209019] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffc25140789
> [12240.209021] R13: 00007ffc25140740 R14: 8421084210842109 R15: 00007f7c3a9c6a80
> [12240.209025]  </TASK>
> [12240.209027] task:kworker/u132:6  state:D stack:0     pid:23378 tgid:23378 ppid:2      flags:0x00004000
> [12240.209030] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.209034] Call Trace:
> [12240.209035]  <TASK>
> [12240.209036]  __schedule+0x425/0x1460
> [12240.209040]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.209042]  ? raid5_bio_lowest_chunk_sector+0x65/0xe0 [raid456]
> [12240.209049]  schedule+0x27/0xf0
> [12240.209052]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.209058]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.209062]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.209067]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.209072]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.209075]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.209077]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.209080]  ? bio_split_rw+0x141/0x2a0
> [12240.209085]  md_handle_request+0x153/0x270 [md_mod]
> [12240.209092]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.209095]  __submit_bio+0x190/0x240
> [12240.209099]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.209102]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.209104]  ? submit_bio_noacct+0x47/0x5b0
> [12240.209108]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.209111]  process_one_work+0x18f/0x3b0
> [12240.209114]  worker_thread+0x21f/0x330
> [12240.209115]  ? __pfx_worker_thread+0x10/0x10
> [12240.209118]  ? __pfx_worker_thread+0x10/0x10
> [12240.209120]  kthread+0xcd/0x100
> [12240.209123]  ? __pfx_kthread+0x10/0x10
> [12240.209125]  ret_from_fork+0x31/0x50
> [12240.209128]  ? __pfx_kthread+0x10/0x10
> [12240.209130]  ret_from_fork_asm+0x1a/0x30
> [12240.209135]  </TASK>
> [12240.209137] task:kworker/u132:29 state:D stack:0     pid:24814 tgid:24814 ppid:2      flags:0x00004000
> [12240.209140] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
> [12240.209143] Call Trace:
> [12240.209144]  <TASK>
> [12240.209146]  __schedule+0x425/0x1460
> [12240.209151]  schedule+0x27/0xf0
> [12240.209154]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
> [12240.209159]  ? __pfx_autoremove_wake_function+0x10/0x10
> [12240.209163]  __add_stripe_bio+0x1f4/0x240 [raid456]
> [12240.209167]  raid5_make_request+0x364/0x1290 [raid456]
> [12240.209172]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.209175]  ? submit_bio_noacct_nocheck+0x276/0x3c0
> [12240.209178]  ? __pfx_woken_wake_function+0x10/0x10
> [12240.209180]  ? bio_split_rw+0x141/0x2a0
> [12240.209185]  md_handle_request+0x153/0x270 [md_mod]
> [12240.209191]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.209194]  __submit_bio+0x190/0x240
> [12240.209198]  submit_bio_noacct_nocheck+0x19a/0x3c0
> [12240.209201]  ? srso_alias_return_thunk+0x5/0xfbef5
> [12240.209204]  ? submit_bio_noacct+0x47/0x5b0
> [12240.209207]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
> [12240.209210]  process_one_work+0x18f/0x3b0
> [12240.209213]  worker_thread+0x21f/0x330
> [12240.209215]  ? __pfx_worker_thread+0x10/0x10
> [12240.209217]  ? __pfx_worker_thread+0x10/0x10
> [12240.209219]  kthread+0xcd/0x100
> [12240.209222]  ? __pfx_kthread+0x10/0x10
> [12240.209224]  ret_from_fork+0x31/0x50
> [12240.209227]  ? __pfx_kthread+0x10/0x10
> [12240.209229]  ret_from_fork_asm+0x1a/0x30
> [12240.209233]  </TASK>
> 
> 
>> On 4. Nov 2024, at 15:45, Christian Theune <ct@flyingcircus.io> wrote:
>>
>> Hi
>>
>>> On 4. Nov 2024, at 13:18, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>
>>>
>>> I think I found a problem by code review, can you test the following
>>> patch? (Noted this is still from latest mainline).
>>
>> Thanks - I can try that. I think I got the gist of it and adapted it to 6.11.6:
>> https://github.com/flyingcircusio/linux-stable/pull/2/files
>>
>> (I’m not properly tooled to apply embedded patches from the mailing lists; I need to improve my toolchain if I have to deep-dive this far more often in the future.)
>>
>> Kind regards,
>> Christian Theune
>>
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>
> 
> Kind regards,
> Christian Theune
> 


^ permalink raw reply related	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-05  1:20                                                                                   ` Yu Kuai
@ 2024-11-05  6:23                                                                                     ` Christian Theune
  2024-11-05 10:15                                                                                       ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-05  6:23 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi

> On 5. Nov 2024, at 02:20, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Thanks for the test, and sorry; I just realized my patch doesn't fix
> the problem I noticed. :(
> 
> Please test again with the following patch on top. I'm confident that
> this, at least, is a real problem.
> 
> Thanks for testing!
> Kuai
> 
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 04f32173839a..7ec6a5d2d166 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -4042,7 +4042,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
>                             test_bit(R5_SkipCopy, &dev->flags))) {
>                                /* We can return any write requests */
>                                struct bio *wbi, *wbi2;
> -                               bool written = false;
> +                               bool written;
> 
>                                pr_debug("Return write for disc %d\n", i);
>                                if (test_and_clear_bit(R5_Discard, &dev->flags))
> @@ -4053,6 +4053,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
>                                do_endio = true;
> 
> returnbi:
> +                               written = false;
>                                dev->page = dev->orig_page;
>                                wbi = dev->written;
>                                dev->written = NULL;

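For anyone skimming the thread, the control-flow bug the patch addresses can be reduced to a standalone sketch (hypothetical names, not the kernel's): a flag initialized only at its declaration keeps a stale value when a goto label re-enters the loop, which is why the fix moves the reset to the `returnbi:` label.

```c
#include <stdbool.h>

/* Hypothetical reduction of the pattern: two passes through a goto
 * label; only the first pass does any work, so the second pass should
 * not report "written". */
static bool last_pass_written_buggy(void)
{
    int pass = 0;
    bool written = false;   /* initialized once, at the declaration */

retry:
    if (pass == 0)
        written = true;     /* work only happens on the first pass */
    pass++;
    if (pass < 2)
        goto retry;         /* re-enter WITHOUT resetting 'written' */
    return written;         /* stale: still true from pass 0 */
}

static bool last_pass_written_fixed(void)
{
    int pass = 0;
    bool written;

retry:
    written = false;        /* the fix: reset at the label on every pass */
    if (pass == 0)
        written = true;
    pass++;
    if (pass < 2)
        goto retry;
    return written;         /* correctly false for the idle second pass */
}
```

The buggy variant returns true even though the second pass did nothing, which mirrors how a stale `written` could mis-drive the bitmap accounting in the loop the patch touches.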
Heh, I stared at your first patch earlier and had a hunch that the looping part would need a reset, but I didn’t quite trust my intuition as I don’t know the code well enough … ;)

I’m booting with this additional change now and will report back again later.

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-05  6:23                                                                                     ` Christian Theune
@ 2024-11-05 10:15                                                                                       ` Christian Theune
  2024-11-06  6:35                                                                                         ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-05 10:15 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,


after about 2 hours it stalled again. Here’s the full blocked-process dump. (Tell me if this isn’t helpful; otherwise I’ll keep posting it, as it’s the only real data I can show.)
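(For reference, a dump like the one below is what the kernel emits for the SysRq "show blocked state" trigger; the paths are the standard procfs ones, and the commands need root.)

```shell
# Ask the kernel to dump all tasks in uninterruptible (D) state to the log.
echo 1 > /proc/sys/kernel/sysrq   # enable all SysRq functions
echo w > /proc/sysrq-trigger      # 'w' = show blocked (D-state) tasks
dmesg | grep -A 40 'Show Blocked State'
```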

[13827.157511] sysrq: Show Blocked State
[13827.161742] task:kworker/u130:0  state:D stack:0     pid:212   tgid:212   ppid:2      flags:0x00004000
[13827.161748] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.161758] Call Trace:
[13827.161760]  <TASK>
[13827.161764]  __schedule+0x425/0x1460
[13827.161774]  schedule+0x27/0xf0
[13827.161778]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.161787]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.161793]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.161800]  raid5_make_request+0x364/0x1290 [raid456]
[13827.161806]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.161810]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.161814]  ? __pfx_woken_wake_function+0x10/0x10
[13827.161818]  ? bio_split_rw+0x141/0x2a0
[13827.161823]  md_handle_request+0x153/0x270 [md_mod]
[13827.161830]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.161833]  __submit_bio+0x190/0x240
[13827.161838]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.161841]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.161843]  ? submit_bio_noacct+0x47/0x5b0
[13827.161846]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.161850]  process_one_work+0x18f/0x3b0
[13827.161855]  worker_thread+0x21f/0x330
[13827.161857]  ? __pfx_worker_thread+0x10/0x10
[13827.161859]  kthread+0xcd/0x100
[13827.161863]  ? __pfx_kthread+0x10/0x10
[13827.161866]  ret_from_fork+0x31/0x50
[13827.161870]  ? __pfx_kthread+0x10/0x10
[13827.161873]  ret_from_fork_asm+0x1a/0x30
[13827.161878]  </TASK>
[13827.162001] task:kworker/u130:3  state:D stack:0     pid:1741  tgid:1741  ppid:2      flags:0x00004000
[13827.162006] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162009] Call Trace:
[13827.162010]  <TASK>
[13827.162012]  __schedule+0x425/0x1460
[13827.162017]  schedule+0x27/0xf0
[13827.162019]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162025]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162028]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162033]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162038]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162041]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162043]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162046]  ? bio_split_rw+0x141/0x2a0
[13827.162051]  md_handle_request+0x153/0x270 [md_mod]
[13827.162056]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162060]  __submit_bio+0x190/0x240
[13827.162063]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162066]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162069]  ? submit_bio_noacct+0x47/0x5b0
[13827.162072]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162075]  process_one_work+0x18f/0x3b0
[13827.162078]  worker_thread+0x21f/0x330
[13827.162080]  ? __pfx_worker_thread+0x10/0x10
[13827.162082]  kthread+0xcd/0x100
[13827.162085]  ? __pfx_kthread+0x10/0x10
[13827.162087]  ret_from_fork+0x31/0x50
[13827.162090]  ? __pfx_kthread+0x10/0x10
[13827.162092]  ret_from_fork_asm+0x1a/0x30
[13827.162096]  </TASK>
[13827.162166] task:kworker/u130:4  state:D stack:0     pid:3097  tgid:3097  ppid:2      flags:0x00004000
[13827.162169] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162172] Call Trace:
[13827.162173]  <TASK>
[13827.162175]  __schedule+0x425/0x1460
[13827.162180]  schedule+0x27/0xf0
[13827.162182]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162188]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162190]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162195]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162200]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162203]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162205]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162208]  ? bio_split_rw+0x141/0x2a0
[13827.162213]  md_handle_request+0x153/0x270 [md_mod]
[13827.162218]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162221]  __submit_bio+0x190/0x240
[13827.162226]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162228]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162231]  ? submit_bio_noacct+0x47/0x5b0
[13827.162234]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162237]  process_one_work+0x18f/0x3b0
[13827.162240]  worker_thread+0x21f/0x330
[13827.162243]  ? __pfx_worker_thread+0x10/0x10
[13827.162245]  kthread+0xcd/0x100
[13827.162247]  ? __pfx_kthread+0x10/0x10
[13827.162250]  ret_from_fork+0x31/0x50
[13827.162252]  ? __pfx_kthread+0x10/0x10
[13827.162254]  ret_from_fork_asm+0x1a/0x30
[13827.162259]  </TASK>
[13827.162260] task:kworker/u130:5  state:D stack:0     pid:3098  tgid:3098  ppid:2      flags:0x00004000
[13827.162263] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162266] Call Trace:
[13827.162267]  <TASK>
[13827.162269]  __schedule+0x425/0x1460
[13827.162271]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162277]  schedule+0x27/0xf0
[13827.162279]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162284]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162287]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162292]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162297]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162300]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162302]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162305]  ? bio_split_rw+0x141/0x2a0
[13827.162309]  md_handle_request+0x153/0x270 [md_mod]
[13827.162315]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162318]  __submit_bio+0x190/0x240
[13827.162322]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162326]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162328]  ? submit_bio_noacct+0x47/0x5b0
[13827.162331]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162334]  process_one_work+0x18f/0x3b0
[13827.162337]  worker_thread+0x21f/0x330
[13827.162339]  ? __pfx_worker_thread+0x10/0x10
[13827.162341]  kthread+0xcd/0x100
[13827.162344]  ? __pfx_kthread+0x10/0x10
[13827.162346]  ret_from_fork+0x31/0x50
[13827.162349]  ? __pfx_kthread+0x10/0x10
[13827.162351]  ret_from_fork_asm+0x1a/0x30
[13827.162355]  </TASK>
[13827.162357] task:kworker/u130:6  state:D stack:0     pid:3099  tgid:3099  ppid:2      flags:0x00004000
[13827.162359] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162362] Call Trace:
[13827.162363]  <TASK>
[13827.162365]  __schedule+0x425/0x1460
[13827.162370]  schedule+0x27/0xf0
[13827.162373]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162378]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162381]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162386]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162391]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162394]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162396]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162399]  ? bio_split_rw+0x141/0x2a0
[13827.162403]  md_handle_request+0x153/0x270 [md_mod]
[13827.162409]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162412]  __submit_bio+0x190/0x240
[13827.162416]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162420]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162422]  ? submit_bio_noacct+0x47/0x5b0
[13827.162425]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162428]  process_one_work+0x18f/0x3b0
[13827.162431]  worker_thread+0x21f/0x330
[13827.162433]  ? __pfx_worker_thread+0x10/0x10
[13827.162435]  kthread+0xcd/0x100
[13827.162437]  ? __pfx_kthread+0x10/0x10
[13827.162440]  ret_from_fork+0x31/0x50
[13827.162443]  ? __pfx_kthread+0x10/0x10
[13827.162445]  ret_from_fork_asm+0x1a/0x30
[13827.162449]  </TASK>
[13827.162451] task:kworker/u130:8  state:D stack:0     pid:3101  tgid:3101  ppid:2      flags:0x00004000
[13827.162454] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162457] Call Trace:
[13827.162458]  <TASK>
[13827.162459]  __schedule+0x425/0x1460
[13827.162465]  schedule+0x27/0xf0
[13827.162467]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162472]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162475]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162480]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162485]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162488]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162490]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162493]  ? bio_split_rw+0x141/0x2a0
[13827.162497]  md_handle_request+0x153/0x270 [md_mod]
[13827.162503]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162506]  __submit_bio+0x190/0x240
[13827.162510]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162513]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162516]  ? submit_bio_noacct+0x47/0x5b0
[13827.162519]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162522]  process_one_work+0x18f/0x3b0
[13827.162525]  worker_thread+0x21f/0x330
[13827.162527]  ? __pfx_worker_thread+0x10/0x10
[13827.162529]  ? __pfx_worker_thread+0x10/0x10
[13827.162531]  kthread+0xcd/0x100
[13827.162533]  ? __pfx_kthread+0x10/0x10
[13827.162536]  ret_from_fork+0x31/0x50
[13827.162538]  ? __pfx_kthread+0x10/0x10
[13827.162541]  ret_from_fork_asm+0x1a/0x30
[13827.162545]  </TASK>
[13827.162546] task:kworker/u130:9  state:D stack:0     pid:3102  tgid:3102  ppid:2      flags:0x00004000
[13827.162550] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162552] Call Trace:
[13827.162553]  <TASK>
[13827.162555]  __schedule+0x425/0x1460
[13827.162560]  schedule+0x27/0xf0
[13827.162562]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162568]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162571]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162576]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162581]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162583]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162586]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162589]  ? bio_split_rw+0x141/0x2a0
[13827.162593]  md_handle_request+0x153/0x270 [md_mod]
[13827.162599]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162602]  __submit_bio+0x190/0x240
[13827.162606]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162609]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162612]  ? submit_bio_noacct+0x47/0x5b0
[13827.162615]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162618]  process_one_work+0x18f/0x3b0
[13827.162621]  worker_thread+0x21f/0x330
[13827.162623]  ? __pfx_worker_thread+0x10/0x10
[13827.162625]  kthread+0xcd/0x100
[13827.162628]  ? __pfx_kthread+0x10/0x10
[13827.162630]  ret_from_fork+0x31/0x50
[13827.162632]  ? __pfx_kthread+0x10/0x10
[13827.162635]  ret_from_fork_asm+0x1a/0x30
[13827.162640]  </TASK>
[13827.162641] task:kworker/u130:11 state:D stack:0     pid:3104  tgid:3104  ppid:2      flags:0x00004000
[13827.162643] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162646] Call Trace:
[13827.162648]  <TASK>
[13827.162650]  __schedule+0x425/0x1460
[13827.162655]  schedule+0x27/0xf0
[13827.162657]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162663]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162666]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162671]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162676]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162678]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162681]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162684]  ? bio_split_rw+0x141/0x2a0
[13827.162688]  md_handle_request+0x153/0x270 [md_mod]
[13827.162694]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162698]  __submit_bio+0x190/0x240
[13827.162702]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162704]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162707]  ? submit_bio_noacct+0x47/0x5b0
[13827.162710]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162713]  process_one_work+0x18f/0x3b0
[13827.162716]  worker_thread+0x21f/0x330
[13827.162719]  ? __pfx_worker_thread+0x10/0x10
[13827.162721]  kthread+0xcd/0x100
[13827.162723]  ? __pfx_kthread+0x10/0x10
[13827.162726]  ret_from_fork+0x31/0x50
[13827.162728]  ? __pfx_kthread+0x10/0x10
[13827.162731]  ret_from_fork_asm+0x1a/0x30
[13827.162736]  </TASK>
[13827.162737] task:kworker/u130:12 state:D stack:0     pid:3105  tgid:3105  ppid:2      flags:0x00004000
[13827.162740] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162743] Call Trace:
[13827.162744]  <TASK>
[13827.162746]  __schedule+0x425/0x1460
[13827.162751]  schedule+0x27/0xf0
[13827.162753]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162759]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162762]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162767]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162772]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162774]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162777]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162779]  ? bio_split_rw+0x141/0x2a0
[13827.162785]  md_handle_request+0x153/0x270 [md_mod]
[13827.162790]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162794]  __submit_bio+0x190/0x240
[13827.162798]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162801]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162804]  ? submit_bio_noacct+0x47/0x5b0
[13827.162807]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162809]  process_one_work+0x18f/0x3b0
[13827.162812]  worker_thread+0x21f/0x330
[13827.162815]  ? __pfx_worker_thread+0x10/0x10
[13827.162817]  kthread+0xcd/0x100
[13827.162819]  ? __pfx_kthread+0x10/0x10
[13827.162822]  ret_from_fork+0x31/0x50
[13827.162824]  ? __pfx_kthread+0x10/0x10
[13827.162827]  ret_from_fork_asm+0x1a/0x30
[13827.162831]  </TASK>
[13827.162832] task:kworker/u130:13 state:D stack:0     pid:3106  tgid:3106  ppid:2      flags:0x00004000
[13827.162835] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162838] Call Trace:
[13827.162839]  <TASK>
[13827.162840]  __schedule+0x425/0x1460
[13827.162846]  schedule+0x27/0xf0
[13827.162848]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162854]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162856]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162862]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162867]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162870]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162872]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162875]  ? bio_split_rw+0x141/0x2a0
[13827.162879]  md_handle_request+0x153/0x270 [md_mod]
[13827.162885]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162888]  __submit_bio+0x190/0x240
[13827.162892]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162895]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162902]  ? submit_bio_noacct+0x47/0x5b0
[13827.162905]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.162908]  process_one_work+0x18f/0x3b0
[13827.162911]  worker_thread+0x21f/0x330
[13827.162913]  ? __pfx_worker_thread+0x10/0x10
[13827.162915]  kthread+0xcd/0x100
[13827.162918]  ? __pfx_kthread+0x10/0x10
[13827.162921]  ret_from_fork+0x31/0x50
[13827.162923]  ? __pfx_kthread+0x10/0x10
[13827.162925]  ret_from_fork_asm+0x1a/0x30
[13827.162929]  </TASK>
[13827.162931] task:kworker/u130:14 state:D stack:0     pid:3107  tgid:3107  ppid:2      flags:0x00004000
[13827.162934] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.162936] Call Trace:
[13827.162937]  <TASK>
[13827.162939]  __schedule+0x425/0x1460
[13827.162944]  schedule+0x27/0xf0
[13827.162946]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.162952]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.162955]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.162960]  raid5_make_request+0x364/0x1290 [raid456]
[13827.162965]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162967]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.162970]  ? __pfx_woken_wake_function+0x10/0x10
[13827.162972]  ? bio_split_rw+0x141/0x2a0
[13827.162977]  md_handle_request+0x153/0x270 [md_mod]
[13827.162983]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162986]  __submit_bio+0x190/0x240
[13827.162990]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.162993]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.162995]  ? submit_bio_noacct+0x47/0x5b0
[13827.162998]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163001]  process_one_work+0x18f/0x3b0
[13827.163004]  worker_thread+0x21f/0x330
[13827.163006]  ? __pfx_worker_thread+0x10/0x10
[13827.163008]  kthread+0xcd/0x100
[13827.163011]  ? __pfx_kthread+0x10/0x10
[13827.163014]  ret_from_fork+0x31/0x50
[13827.163016]  ? __pfx_kthread+0x10/0x10
[13827.163018]  ret_from_fork_asm+0x1a/0x30
[13827.163023]  </TASK>
[13827.163024] task:kworker/u130:16 state:D stack:0     pid:3109  tgid:3109  ppid:2      flags:0x00004000
[13827.163027] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163029] Call Trace:
[13827.163031]  <TASK>
[13827.163032]  __schedule+0x425/0x1460
[13827.163037]  schedule+0x27/0xf0
[13827.163040]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163045]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163048]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163053]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163058]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163061]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163065]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163068]  ? bio_split_rw+0x141/0x2a0
[13827.163072]  md_handle_request+0x153/0x270 [md_mod]
[13827.163078]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163081]  __submit_bio+0x190/0x240
[13827.163085]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163088]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163091]  ? submit_bio_noacct+0x47/0x5b0
[13827.163094]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163097]  process_one_work+0x18f/0x3b0
[13827.163100]  worker_thread+0x21f/0x330
[13827.163103]  ? __pfx_worker_thread+0x10/0x10
[13827.163105]  kthread+0xcd/0x100
[13827.163107]  ? __pfx_kthread+0x10/0x10
[13827.163110]  ret_from_fork+0x31/0x50
[13827.163112]  ? __pfx_kthread+0x10/0x10
[13827.163115]  ret_from_fork_asm+0x1a/0x30
[13827.163119]  </TASK>
[13827.163120] task:kworker/u130:17 state:D stack:0     pid:3110  tgid:3110  ppid:2      flags:0x00004000
[13827.163123] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163126] Call Trace:
[13827.163127]  <TASK>
[13827.163128]  __schedule+0x425/0x1460
[13827.163133]  schedule+0x27/0xf0
[13827.163136]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163142]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163144]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163150]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163154]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163157]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163160]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163162]  ? bio_split_rw+0x141/0x2a0
[13827.163167]  md_handle_request+0x153/0x270 [md_mod]
[13827.163173]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163176]  __submit_bio+0x190/0x240
[13827.163180]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163183]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163185]  ? submit_bio_noacct+0x47/0x5b0
[13827.163188]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163192]  process_one_work+0x18f/0x3b0
[13827.163194]  worker_thread+0x21f/0x330
[13827.163197]  ? __pfx_worker_thread+0x10/0x10
[13827.163199]  kthread+0xcd/0x100
[13827.163201]  ? __pfx_kthread+0x10/0x10
[13827.163204]  ret_from_fork+0x31/0x50
[13827.163206]  ? __pfx_kthread+0x10/0x10
[13827.163209]  ret_from_fork_asm+0x1a/0x30
[13827.163213]  </TASK>
[13827.163214] task:kworker/u130:18 state:D stack:0     pid:3111  tgid:3111  ppid:2      flags:0x00004000
[13827.163217] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163219] Call Trace:
[13827.163220]  <TASK>
[13827.163222]  __schedule+0x425/0x1460
[13827.163227]  schedule+0x27/0xf0
[13827.163230]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163235]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163238]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163243]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163248]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163250]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163253]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163255]  ? bio_split_rw+0x141/0x2a0
[13827.163260]  md_handle_request+0x153/0x270 [md_mod]
[13827.163266]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163269]  __submit_bio+0x190/0x240
[13827.163273]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163276]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163279]  ? submit_bio_noacct+0x47/0x5b0
[13827.163282]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163285]  process_one_work+0x18f/0x3b0
[13827.163288]  worker_thread+0x21f/0x330
[13827.163290]  ? __pfx_worker_thread+0x10/0x10
[13827.163292]  kthread+0xcd/0x100
[13827.163294]  ? __pfx_kthread+0x10/0x10
[13827.163297]  ret_from_fork+0x31/0x50
[13827.163300]  ? __pfx_kthread+0x10/0x10
[13827.163302]  ret_from_fork_asm+0x1a/0x30
[13827.163306]  </TASK>
[13827.163308] task:kworker/u130:19 state:D stack:0     pid:3112  tgid:3112  ppid:2      flags:0x00004000
[13827.163311] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163313] Call Trace:
[13827.163314]  <TASK>
[13827.163315]  __schedule+0x425/0x1460
[13827.163321]  schedule+0x27/0xf0
[13827.163323]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163328]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163332]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163336]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163341]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163344]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163347]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163349]  ? bio_split_rw+0x141/0x2a0
[13827.163354]  md_handle_request+0x153/0x270 [md_mod]
[13827.163360]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163363]  __submit_bio+0x190/0x240
[13827.163367]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163370]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163372]  ? submit_bio_noacct+0x47/0x5b0
[13827.163375]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163378]  process_one_work+0x18f/0x3b0
[13827.163381]  worker_thread+0x21f/0x330
[13827.163384]  ? __pfx_worker_thread+0x10/0x10
[13827.163386]  kthread+0xcd/0x100
[13827.163388]  ? __pfx_kthread+0x10/0x10
[13827.163391]  ret_from_fork+0x31/0x50
[13827.163393]  ? __pfx_kthread+0x10/0x10
[13827.163396]  ret_from_fork_asm+0x1a/0x30
[13827.163401]  </TASK>
[13827.163402] task:kworker/u130:20 state:D stack:0     pid:3113  tgid:3113  ppid:2      flags:0x00004000
[13827.163405] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163407] Call Trace:
[13827.163408]  <TASK>
[13827.163410]  __schedule+0x425/0x1460
[13827.163415]  schedule+0x27/0xf0
[13827.163417]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163423]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163426]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163431]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163436]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163438]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163441]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163444]  ? bio_split_rw+0x141/0x2a0
[13827.163448]  md_handle_request+0x153/0x270 [md_mod]
[13827.163454]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163458]  __submit_bio+0x190/0x240
[13827.163461]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163464]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163467]  ? submit_bio_noacct+0x47/0x5b0
[13827.163470]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163473]  process_one_work+0x18f/0x3b0
[13827.163476]  worker_thread+0x21f/0x330
[13827.163478]  ? __pfx_worker_thread+0x10/0x10
[13827.163481]  kthread+0xcd/0x100
[13827.163483]  ? __pfx_kthread+0x10/0x10
[13827.163485]  ret_from_fork+0x31/0x50
[13827.163488]  ? __pfx_kthread+0x10/0x10
[13827.163490]  ret_from_fork_asm+0x1a/0x30
[13827.163495]  </TASK>
[13827.163496] task:kworker/u130:21 state:D stack:0     pid:3114  tgid:3114  ppid:2      flags:0x00004000
[13827.163499] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163502] Call Trace:
[13827.163503]  <TASK>
[13827.163504]  __schedule+0x425/0x1460
[13827.163509]  schedule+0x27/0xf0
[13827.163512]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163518]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163520]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163525]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163530]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163533]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163535]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163538]  ? bio_split_rw+0x141/0x2a0
[13827.163543]  md_handle_request+0x153/0x270 [md_mod]
[13827.163548]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163552]  __submit_bio+0x190/0x240
[13827.163556]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163558]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163561]  ? submit_bio_noacct+0x47/0x5b0
[13827.163564]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163567]  process_one_work+0x18f/0x3b0
[13827.163570]  worker_thread+0x21f/0x330
[13827.163572]  ? __pfx_worker_thread+0x10/0x10
[13827.163574]  kthread+0xcd/0x100
[13827.163577]  ? __pfx_kthread+0x10/0x10
[13827.163579]  ret_from_fork+0x31/0x50
[13827.163582]  ? __pfx_kthread+0x10/0x10
[13827.163584]  ret_from_fork_asm+0x1a/0x30
[13827.163588]  </TASK>
[13827.163590] task:kworker/u130:23 state:D stack:0     pid:3116  tgid:3116  ppid:2      flags:0x00004000
[13827.163593] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163595] Call Trace:
[13827.163596]  <TASK>
[13827.163598]  __schedule+0x425/0x1460
[13827.163603]  schedule+0x27/0xf0
[13827.163605]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163611]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163614]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163618]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163623]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163626]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163629]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163631]  ? bio_split_rw+0x141/0x2a0
[13827.163636]  md_handle_request+0x153/0x270 [md_mod]
[13827.163642]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163645]  __submit_bio+0x190/0x240
[13827.163649]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163652]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163654]  ? submit_bio_noacct+0x47/0x5b0
[13827.163657]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163661]  process_one_work+0x18f/0x3b0
[13827.163663]  worker_thread+0x21f/0x330
[13827.163666]  ? __pfx_worker_thread+0x10/0x10
[13827.163668]  kthread+0xcd/0x100
[13827.163670]  ? __pfx_kthread+0x10/0x10
[13827.163673]  ret_from_fork+0x31/0x50
[13827.163675]  ? __pfx_kthread+0x10/0x10
[13827.163678]  ret_from_fork_asm+0x1a/0x30
[13827.163682]  </TASK>
[13827.163684] task:kworker/u130:25 state:D stack:0     pid:3118  tgid:3118  ppid:2      flags:0x00004000
[13827.163686] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163689] Call Trace:
[13827.163690]  <TASK>
[13827.163692]  __schedule+0x425/0x1460
[13827.163697]  schedule+0x27/0xf0
[13827.163699]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163705]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163708]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163712]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163717]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163720]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163723]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163725]  ? bio_split_rw+0x141/0x2a0
[13827.163730]  md_handle_request+0x153/0x270 [md_mod]
[13827.163736]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163739]  __submit_bio+0x190/0x240
[13827.163743]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163746]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163749]  ? submit_bio_noacct+0x47/0x5b0
[13827.163752]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163755]  process_one_work+0x18f/0x3b0
[13827.163758]  worker_thread+0x21f/0x330
[13827.163760]  ? __pfx_worker_thread+0x10/0x10
[13827.163762]  kthread+0xcd/0x100
[13827.163765]  ? __pfx_kthread+0x10/0x10
[13827.163767]  ret_from_fork+0x31/0x50
[13827.163769]  ? __pfx_kthread+0x10/0x10
[13827.163772]  ret_from_fork_asm+0x1a/0x30
[13827.163776]  </TASK>
[13827.163778] task:kworker/u130:26 state:D stack:0     pid:3119  tgid:3119  ppid:2      flags:0x00004000
[13827.163780] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163783] Call Trace:
[13827.163783]  <TASK>
[13827.163785]  __schedule+0x425/0x1460
[13827.163790]  schedule+0x27/0xf0
[13827.163793]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163798]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163801]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163806]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163811]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163814]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163817]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163819]  ? bio_split_rw+0x141/0x2a0
[13827.163824]  md_handle_request+0x153/0x270 [md_mod]
[13827.163830]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163833]  __submit_bio+0x190/0x240
[13827.163837]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163840]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163842]  ? submit_bio_noacct+0x47/0x5b0
[13827.163845]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163848]  process_one_work+0x18f/0x3b0
[13827.163851]  worker_thread+0x21f/0x330
[13827.163854]  ? __pfx_worker_thread+0x10/0x10
[13827.163856]  kthread+0xcd/0x100
[13827.163858]  ? __pfx_kthread+0x10/0x10
[13827.163861]  ret_from_fork+0x31/0x50
[13827.163863]  ? __pfx_kthread+0x10/0x10
[13827.163866]  ret_from_fork_asm+0x1a/0x30
[13827.163870]  </TASK>
[13827.163872] task:kworker/u130:27 state:D stack:0     pid:3120  tgid:3120  ppid:2      flags:0x00004000
[13827.163874] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163877] Call Trace:
[13827.163878]  <TASK>
[13827.163879]  __schedule+0x425/0x1460
[13827.163885]  schedule+0x27/0xf0
[13827.163887]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163892]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163896]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.163904]  raid5_make_request+0x364/0x1290 [raid456]
[13827.163909]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163912]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.163914]  ? __pfx_woken_wake_function+0x10/0x10
[13827.163917]  ? bio_split_rw+0x141/0x2a0
[13827.163922]  md_handle_request+0x153/0x270 [md_mod]
[13827.163927]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163931]  __submit_bio+0x190/0x240
[13827.163935]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.163938]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.163940]  ? submit_bio_noacct+0x47/0x5b0
[13827.163943]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.163946]  process_one_work+0x18f/0x3b0
[13827.163949]  worker_thread+0x21f/0x330
[13827.163951]  ? __pfx_worker_thread+0x10/0x10
[13827.163953]  kthread+0xcd/0x100
[13827.163956]  ? __pfx_kthread+0x10/0x10
[13827.163959]  ret_from_fork+0x31/0x50
[13827.163961]  ? __pfx_kthread+0x10/0x10
[13827.163964]  ret_from_fork_asm+0x1a/0x30
[13827.163969]  </TASK>
[13827.163970] task:kworker/u130:29 state:D stack:0     pid:3122  tgid:3122  ppid:2      flags:0x00004000
[13827.163973] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.163976] Call Trace:
[13827.163977]  <TASK>
[13827.163979]  __schedule+0x425/0x1460
[13827.163984]  schedule+0x27/0xf0
[13827.163986]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.163992]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.163995]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164000]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164005]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164007]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164010]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164013]  ? bio_split_rw+0x141/0x2a0
[13827.164018]  md_handle_request+0x153/0x270 [md_mod]
[13827.164023]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164027]  __submit_bio+0x190/0x240
[13827.164030]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164033]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164036]  ? submit_bio_noacct+0x47/0x5b0
[13827.164039]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164042]  process_one_work+0x18f/0x3b0
[13827.164045]  worker_thread+0x21f/0x330
[13827.164047]  ? __pfx_worker_thread+0x10/0x10
[13827.164049]  ? __pfx_worker_thread+0x10/0x10
[13827.164051]  kthread+0xcd/0x100
[13827.164054]  ? __pfx_kthread+0x10/0x10
[13827.164057]  ret_from_fork+0x31/0x50
[13827.164059]  ? __pfx_kthread+0x10/0x10
[13827.164062]  ret_from_fork_asm+0x1a/0x30
[13827.164066]  </TASK>
[13827.164067] task:kworker/u130:30 state:D stack:0     pid:3123  tgid:3123  ppid:2      flags:0x00004000
[13827.164070] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.164073] Call Trace:
[13827.164074]  <TASK>
[13827.164075]  __schedule+0x425/0x1460
[13827.164080]  schedule+0x27/0xf0
[13827.164083]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.164089]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.164091]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164096]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164101]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164104]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164106]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164109]  ? bio_split_rw+0x141/0x2a0
[13827.164114]  md_handle_request+0x153/0x270 [md_mod]
[13827.164120]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164123]  __submit_bio+0x190/0x240
[13827.164127]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164130]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164133]  ? submit_bio_noacct+0x47/0x5b0
[13827.164137]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164139]  process_one_work+0x18f/0x3b0
[13827.164142]  worker_thread+0x21f/0x330
[13827.164145]  ? __pfx_worker_thread+0x10/0x10
[13827.164147]  kthread+0xcd/0x100
[13827.164149]  ? __pfx_kthread+0x10/0x10
[13827.164152]  ret_from_fork+0x31/0x50
[13827.164154]  ? __pfx_kthread+0x10/0x10
[13827.164157]  ret_from_fork_asm+0x1a/0x30
[13827.164161]  </TASK>
[13827.164162] task:kworker/u130:31 state:D stack:0     pid:3124  tgid:3124  ppid:2      flags:0x00004000
[13827.164165] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.164168] Call Trace:
[13827.164168]  <TASK>
[13827.164170]  __schedule+0x425/0x1460
[13827.164175]  schedule+0x27/0xf0
[13827.164178]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.164183]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.164186]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164191]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164196]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164198]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164201]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164204]  ? bio_split_rw+0x141/0x2a0
[13827.164208]  md_handle_request+0x153/0x270 [md_mod]
[13827.164214]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164217]  __submit_bio+0x190/0x240
[13827.164221]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164224]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164227]  ? submit_bio_noacct+0x47/0x5b0
[13827.164230]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164233]  process_one_work+0x18f/0x3b0
[13827.164236]  worker_thread+0x21f/0x330
[13827.164239]  ? __pfx_worker_thread+0x10/0x10
[13827.164240]  kthread+0xcd/0x100
[13827.164243]  ? __pfx_kthread+0x10/0x10
[13827.164245]  ret_from_fork+0x31/0x50
[13827.164248]  ? __pfx_kthread+0x10/0x10
[13827.164251]  ret_from_fork_asm+0x1a/0x30
[13827.164255]  </TASK>
[13827.164257] task:kworker/u130:33 state:D stack:0     pid:4077  tgid:4077  ppid:2      flags:0x00004000
[13827.164260] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.164262] Call Trace:
[13827.164263]  <TASK>
[13827.164265]  __schedule+0x425/0x1460
[13827.164267]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164272]  schedule+0x27/0xf0
[13827.164274]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.164280]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.164284]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164288]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164294]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164296]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164299]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164302]  ? bio_split_rw+0x141/0x2a0
[13827.164307]  md_handle_request+0x153/0x270 [md_mod]
[13827.164313]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164316]  __submit_bio+0x190/0x240
[13827.164320]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164323]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164326]  ? submit_bio_noacct+0x47/0x5b0
[13827.164329]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164332]  process_one_work+0x18f/0x3b0
[13827.164334]  worker_thread+0x21f/0x330
[13827.164337]  ? __pfx_worker_thread+0x10/0x10
[13827.164339]  ? __pfx_worker_thread+0x10/0x10
[13827.164341]  kthread+0xcd/0x100
[13827.164343]  ? __pfx_kthread+0x10/0x10
[13827.164346]  ret_from_fork+0x31/0x50
[13827.164349]  ? __pfx_kthread+0x10/0x10
[13827.164351]  ret_from_fork_asm+0x1a/0x30
[13827.164355]  </TASK>
[13827.164358] task:kworker/u130:32 state:D stack:0     pid:10102 tgid:10102 ppid:2      flags:0x00004000
[13827.164361] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.164364] Call Trace:
[13827.164365]  <TASK>
[13827.164367]  __schedule+0x425/0x1460
[13827.164372]  schedule+0x27/0xf0
[13827.164374]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.164380]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.164383]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164388]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164394]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164396]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164399]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164401]  ? bio_split_rw+0x141/0x2a0
[13827.164406]  md_handle_request+0x153/0x270 [md_mod]
[13827.164412]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164415]  __submit_bio+0x190/0x240
[13827.164419]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164422]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164424]  ? submit_bio_noacct+0x47/0x5b0
[13827.164428]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164430]  process_one_work+0x18f/0x3b0
[13827.164433]  worker_thread+0x21f/0x330
[13827.164435]  ? __pfx_worker_thread+0x10/0x10
[13827.164437]  ? __pfx_worker_thread+0x10/0x10
[13827.164440]  kthread+0xcd/0x100
[13827.164442]  ? __pfx_kthread+0x10/0x10
[13827.164444]  ret_from_fork+0x31/0x50
[13827.164447]  ? __pfx_kthread+0x10/0x10
[13827.164449]  ret_from_fork_asm+0x1a/0x30
[13827.164454]  </TASK>
[13827.164455] task:kworker/u132:9  state:D stack:0     pid:10362 tgid:10362 ppid:2      flags:0x00004000
[13827.164459] Workqueue: xfs-cil/dm-4 xlog_cil_push_work [xfs]
[13827.164544] Call Trace:
[13827.164545]  <TASK>
[13827.164547]  __schedule+0x425/0x1460
[13827.164553]  schedule+0x27/0xf0
[13827.164556]  xlog_wait_on_iclog+0x167/0x180 [xfs]
[13827.164611]  ? __pfx_default_wake_function+0x10/0x10
[13827.164615]  xlog_cil_push_work+0x84a/0x880 [xfs]
[13827.164669]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164673]  process_one_work+0x18f/0x3b0
[13827.164676]  worker_thread+0x21f/0x330
[13827.164678]  ? __pfx_worker_thread+0x10/0x10
[13827.164680]  kthread+0xcd/0x100
[13827.164683]  ? __pfx_kthread+0x10/0x10
[13827.164686]  ret_from_fork+0x31/0x50
[13827.164688]  ? __pfx_kthread+0x10/0x10
[13827.164690]  ret_from_fork_asm+0x1a/0x30
[13827.164695]  </TASK>
[13827.164697] task:kworker/u130:15 state:D stack:0     pid:16150 tgid:16150 ppid:2      flags:0x00004000
[13827.164701] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.164704] Call Trace:
[13827.164704]  <TASK>
[13827.164706]  __schedule+0x425/0x1460
[13827.164711]  schedule+0x27/0xf0
[13827.164714]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.164721]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.164724]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164729]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164734]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164737]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164739]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164742]  ? bio_split_rw+0x141/0x2a0
[13827.164747]  md_handle_request+0x153/0x270 [md_mod]
[13827.164753]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164757]  __submit_bio+0x190/0x240
[13827.164761]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164764]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164766]  ? submit_bio_noacct+0x47/0x5b0
[13827.164770]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164773]  process_one_work+0x18f/0x3b0
[13827.164775]  worker_thread+0x21f/0x330
[13827.164778]  ? __pfx_worker_thread+0x10/0x10
[13827.164780]  kthread+0xcd/0x100
[13827.164782]  ? __pfx_kthread+0x10/0x10
[13827.164785]  ret_from_fork+0x31/0x50
[13827.164787]  ? __pfx_kthread+0x10/0x10
[13827.164790]  ret_from_fork_asm+0x1a/0x30
[13827.164794]  </TASK>
[13827.164796] task:kworker/u130:1  state:D stack:0     pid:18969 tgid:18969 ppid:2      flags:0x00004000
[13827.164799] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.164801] Call Trace:
[13827.164803]  <TASK>
[13827.164804]  __schedule+0x425/0x1460
[13827.164809]  schedule+0x27/0xf0
[13827.164812]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.164818]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.164821]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164826]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164831]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164833]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164836]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164839]  ? bio_split_rw+0x141/0x2a0
[13827.164843]  md_handle_request+0x153/0x270 [md_mod]
[13827.164849]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164853]  __submit_bio+0x190/0x240
[13827.164856]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164860]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164862]  ? submit_bio_noacct+0x47/0x5b0
[13827.164865]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164868]  process_one_work+0x18f/0x3b0
[13827.164871]  worker_thread+0x21f/0x330
[13827.164873]  ? __pfx_worker_thread+0x10/0x10
[13827.164875]  ? __pfx_worker_thread+0x10/0x10
[13827.164877]  kthread+0xcd/0x100
[13827.164880]  ? __pfx_kthread+0x10/0x10
[13827.164882]  ret_from_fork+0x31/0x50
[13827.164885]  ? __pfx_kthread+0x10/0x10
[13827.164887]  ret_from_fork_asm+0x1a/0x30
[13827.164891]  </TASK>
[13827.164901] task:kworker/u130:28 state:D stack:0     pid:25349 tgid:25349 ppid:2      flags:0x00004000
[13827.164904] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.164907] Call Trace:
[13827.164908]  <TASK>
[13827.164909]  __schedule+0x425/0x1460
[13827.164914]  schedule+0x27/0xf0
[13827.164917]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.164922]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.164925]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.164930]  raid5_make_request+0x364/0x1290 [raid456]
[13827.164935]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164938]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.164941]  ? __pfx_woken_wake_function+0x10/0x10
[13827.164943]  ? bio_split_rw+0x141/0x2a0
[13827.164948]  md_handle_request+0x153/0x270 [md_mod]
[13827.164954]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164957]  __submit_bio+0x190/0x240
[13827.164961]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.164964]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.164967]  ? submit_bio_noacct+0x47/0x5b0
[13827.164970]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.164973]  process_one_work+0x18f/0x3b0
[13827.164976]  worker_thread+0x21f/0x330
[13827.164978]  ? __pfx_worker_thread+0x10/0x10
[13827.164980]  kthread+0xcd/0x100
[13827.164982]  ? __pfx_kthread+0x10/0x10
[13827.164985]  ret_from_fork+0x31/0x50
[13827.164988]  ? __pfx_kthread+0x10/0x10
[13827.164990]  ret_from_fork_asm+0x1a/0x30
[13827.164994]  </TASK>
[13827.165003] task:rsync           state:D stack:0     pid:26235 tgid:26235 ppid:26234  flags:0x00000000
[13827.165006] Call Trace:
[13827.165007]  <TASK>
[13827.165009]  __schedule+0x425/0x1460
[13827.165014]  ? nvme_queue_rqs+0x149/0x1f0 [nvme]
[13827.165019]  schedule+0x27/0xf0
[13827.165022]  schedule_timeout+0x15d/0x170
[13827.165026]  __down_common+0x119/0x220
[13827.165028]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.165032]  down+0x47/0x60
[13827.165034]  xfs_buf_lock+0x31/0xe0 [xfs]
[13827.165110]  xfs_buf_find_lock+0x55/0x100 [xfs]
[13827.165163]  xfs_buf_get_map+0x1ea/0xa80 [xfs]
[13827.165215]  ? __entry_text_end+0x101e87/0x101e89
[13827.165220]  xfs_buf_read_map+0x62/0x2a0 [xfs]
[13827.165272]  ? xfs_da_read_buf+0x106/0x180 [xfs]
[13827.165344]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
[13827.165414]  ? xfs_da_read_buf+0x106/0x180 [xfs]
[13827.165476]  xfs_da_read_buf+0x106/0x180 [xfs]
[13827.165528]  xfs_dir3_block_read+0x3c/0xf0 [xfs]
[13827.165588]  xfs_dir2_block_getdents+0xa8/0x2a0 [xfs]
[13827.165656]  xfs_readdir+0x1bf/0x200 [xfs]
[13827.165709]  iterate_dir+0x121/0x220
[13827.165715]  __x64_sys_getdents64+0x88/0x130
[13827.165717]  ? __pfx_filldir64+0x10/0x10
[13827.165721]  do_syscall_64+0xb7/0x200
[13827.165726]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[13827.165729] RIP: 0033:0x7f9a23704ea7
[13827.165731] RSP: 002b:00007ffef3ab7a58 EFLAGS: 00000293 ORIG_RAX: 00000000000000d9
[13827.165734] RAX: ffffffffffffffda RBX: 000000001649b740 RCX: 00007f9a23704ea7
[13827.165735] RDX: 0000000000008000 RSI: 000000001649b770 RDI: 0000000000000005
[13827.165737] RBP: 000000001649b770 R08: 0000000000000030 R09: 0000000000000001
[13827.165738] R10: 0000000000000000 R11: 0000000000000293 R12: ffffffffffffff88
[13827.165740] R13: 000000001649b744 R14: 0000000000000000 R15: 0000000000000001
[13827.165744]  </TASK>
[13827.165749] task:kworker/u130:34 state:D stack:0     pid:29724 tgid:29724 ppid:2      flags:0x00004000
[13827.165752] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.165756] Call Trace:
[13827.165757]  <TASK>
[13827.165759]  __schedule+0x425/0x1460
[13827.165765]  schedule+0x27/0xf0
[13827.165767]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.165774]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.165777]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.165782]  raid5_make_request+0x364/0x1290 [raid456]
[13827.165788]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.165791]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.165794]  ? __pfx_woken_wake_function+0x10/0x10
[13827.165797]  ? bio_split_rw+0x141/0x2a0
[13827.165801]  md_handle_request+0x153/0x270 [md_mod]
[13827.165808]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.165811]  __submit_bio+0x190/0x240
[13827.165816]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.165819]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.165821]  ? submit_bio_noacct+0x47/0x5b0
[13827.165824]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.165827]  process_one_work+0x18f/0x3b0
[13827.165831]  worker_thread+0x21f/0x330
[13827.165832]  ? __pfx_worker_thread+0x10/0x10
[13827.165835]  ? __pfx_worker_thread+0x10/0x10
[13827.165837]  kthread+0xcd/0x100
[13827.165840]  ? __pfx_kthread+0x10/0x10
[13827.165842]  ret_from_fork+0x31/0x50
[13827.165845]  ? __pfx_kthread+0x10/0x10
[13827.165847]  ret_from_fork_asm+0x1a/0x30
[13827.165852]  </TASK>
[13827.165854] task:kworker/31:2    state:D stack:0     pid:29965 tgid:29965 ppid:2      flags:0x00004000
[13827.165858] Workqueue: xfs-sync/dm-4 xfs_log_worker [xfs]
[13827.165928] Call Trace:
[13827.165930]  <TASK>
[13827.165932]  __schedule+0x425/0x1460
[13827.165934]  ? ttwu_do_activate+0x64/0x210
[13827.165937]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.165942]  schedule+0x27/0xf0
[13827.165944]  schedule_timeout+0x15d/0x170
[13827.165948]  __wait_for_common+0x90/0x1c0
[13827.165951]  ? __pfx_schedule_timeout+0x10/0x10
[13827.165955]  __flush_workqueue+0x158/0x440
[13827.165957]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.165962]  xlog_cil_push_now.isra.0+0x5e/0xa0 [xfs]
[13827.166014]  xlog_cil_force_seq+0x69/0x240 [xfs]
[13827.166066]  xfs_log_force+0x7a/0x230 [xfs]
[13827.166116]  xfs_log_worker+0x3d/0xc0 [xfs]
[13827.166165]  process_one_work+0x18f/0x3b0
[13827.166168]  worker_thread+0x21f/0x330
[13827.166170]  ? __pfx_worker_thread+0x10/0x10
[13827.166173]  ? __pfx_worker_thread+0x10/0x10
[13827.166175]  kthread+0xcd/0x100
[13827.166177]  ? __pfx_kthread+0x10/0x10
[13827.166180]  ret_from_fork+0x31/0x50
[13827.166183]  ? __pfx_kthread+0x10/0x10
[13827.166185]  ret_from_fork_asm+0x1a/0x30
[13827.166190]  </TASK>
[13827.166194] task:rsync           state:D stack:0     pid:30636 tgid:30636 ppid:30606  flags:0x00000000
[13827.166197] Call Trace:
[13827.166198]  <TASK>
[13827.166199]  __schedule+0x425/0x1460
[13827.166202]  ? blk_mq_flush_plug_list.part.0+0x4a7/0x5a0
[13827.166208]  schedule+0x27/0xf0
[13827.166210]  schedule_timeout+0x15d/0x170
[13827.166214]  __wait_for_common+0x90/0x1c0
[13827.166216]  ? __pfx_schedule_timeout+0x10/0x10
[13827.166220]  xfs_buf_iowait+0x1c/0xc0 [xfs]
[13827.166282]  __xfs_buf_submit+0x132/0x1e0 [xfs]
[13827.166333]  xfs_buf_read_map+0x129/0x2a0 [xfs]
[13827.166383]  ? xfs_da_read_buf+0x106/0x180 [xfs]
[13827.166451]  xfs_trans_read_buf_map+0x12e/0x310 [xfs]
[13827.166517]  ? xfs_da_read_buf+0x106/0x180 [xfs]
[13827.166575]  xfs_da_read_buf+0x106/0x180 [xfs]
[13827.166627]  __xfs_dir3_free_read+0x34/0x1a0 [xfs]
[13827.166688]  xfs_dir2_node_addname+0x4ba/0xa30 [xfs]
[13827.166739]  ? xfs_bmap_last_offset+0x98/0x140 [xfs]
[13827.166805]  xfs_dir_createname+0x129/0x160 [xfs]
[13827.166860]  ? xfs_trans_ichgtime+0x2f/0x90 [xfs]
[13827.166935]  xfs_dir_create_child+0x6a/0x150 [xfs]
[13827.166989]  ? xfs_diflags_to_iflags+0x12/0x50 [xfs]
[13827.167057]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167060]  ? xfs_setup_inode+0x52/0x110 [xfs]
[13827.167109]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167113]  xfs_create+0x3b3/0x490 [xfs]
[13827.167170]  xfs_generic_create+0x312/0x370 [xfs]
[13827.167222]  path_openat+0xf54/0x1210
[13827.167227]  do_filp_open+0xc4/0x170
[13827.167234]  do_sys_openat2+0xab/0xe0
[13827.167239]  __x64_sys_openat+0x57/0xa0
[13827.167243]  do_syscall_64+0xb7/0x200
[13827.167246]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[13827.167248] RIP: 0033:0x7f44f132be2f
[13827.167250] RSP: 002b:00007ffffdc01f90 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[13827.167253] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f44f132be2f
[13827.167254] RDX: 00000000000000c2 RSI: 00007ffffdc04260 RDI: 00000000ffffff9c
[13827.167255] RBP: 000000000003a2f8 R08: 001fa65a6d219dd4 R09: 00007ffffdc021cc
[13827.167256] R10: 0000000000000180 R11: 0000000000000246 R12: 00007ffffdc042a9
[13827.167257] R13: 00007ffffdc04260 R14: 8421084210842109 R15: 00007f44f13c6a80
[13827.167261]  </TASK>
[13827.167265] task:kworker/u130:22 state:D stack:0     pid:32348 tgid:32348 ppid:2      flags:0x00004000
[13827.167268] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.167271] Call Trace:
[13827.167272]  <TASK>
[13827.167275]  __schedule+0x425/0x1460
[13827.167280]  schedule+0x27/0xf0
[13827.167283]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.167290]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.167293]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.167298]  raid5_make_request+0x364/0x1290 [raid456]
[13827.167303]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167306]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.167309]  ? __pfx_woken_wake_function+0x10/0x10
[13827.167312]  ? bio_split_rw+0x141/0x2a0
[13827.167317]  md_handle_request+0x153/0x270 [md_mod]
[13827.167323]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167327]  __submit_bio+0x190/0x240
[13827.167331]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.167334]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167336]  ? submit_bio_noacct+0x47/0x5b0
[13827.167340]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.167343]  process_one_work+0x18f/0x3b0
[13827.167346]  worker_thread+0x21f/0x330
[13827.167348]  ? __pfx_worker_thread+0x10/0x10
[13827.167350]  ? __pfx_worker_thread+0x10/0x10
[13827.167353]  kthread+0xcd/0x100
[13827.167355]  ? __pfx_kthread+0x10/0x10
[13827.167358]  ret_from_fork+0x31/0x50
[13827.167360]  ? __pfx_kthread+0x10/0x10
[13827.167363]  ret_from_fork_asm+0x1a/0x30
[13827.167368]  </TASK>
[13827.167370] task:kworker/u130:35 state:D stack:0     pid:34332 tgid:34332 ppid:2      flags:0x00004000
[13827.167373] Workqueue: kcryptd-254:4-1 kcryptd_crypt [dm_crypt]
[13827.167376] Call Trace:
[13827.167377]  <TASK>
[13827.167379]  __schedule+0x425/0x1460
[13827.167384]  schedule+0x27/0xf0
[13827.167387]  md_bitmap_startwrite+0x14f/0x1c0 [md_mod]
[13827.167392]  ? __pfx_autoremove_wake_function+0x10/0x10
[13827.167395]  __add_stripe_bio+0x1f4/0x240 [raid456]
[13827.167400]  raid5_make_request+0x364/0x1290 [raid456]
[13827.167405]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167408]  ? submit_bio_noacct_nocheck+0x276/0x3c0
[13827.167411]  ? __pfx_woken_wake_function+0x10/0x10
[13827.167413]  ? bio_split_rw+0x141/0x2a0
[13827.167418]  md_handle_request+0x153/0x270 [md_mod]
[13827.167424]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167427]  __submit_bio+0x190/0x240
[13827.167431]  submit_bio_noacct_nocheck+0x19a/0x3c0
[13827.167434]  ? srso_alias_return_thunk+0x5/0xfbef5
[13827.167437]  ? submit_bio_noacct+0x47/0x5b0
[13827.167440]  kcryptd_crypt_write_convert+0x118/0x1e0 [dm_crypt]
[13827.167443]  process_one_work+0x18f/0x3b0
[13827.167446]  worker_thread+0x21f/0x330
[13827.167448]  ? __pfx_worker_thread+0x10/0x10
[13827.167450]  ? __pfx_worker_thread+0x10/0x10
[13827.167452]  kthread+0xcd/0x100
[13827.167454]  ? __pfx_kthread+0x10/0x10
[13827.167457]  ret_from_fork+0x31/0x50
[13827.167459]  ? __pfx_kthread+0x10/0x10
[13827.167462]  ret_from_fork_asm+0x1a/0x30
[13827.167466]  </TASK>

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-05 10:15                                                                                       ` Christian Theune
@ 2024-11-06  6:35                                                                                         ` Yu Kuai
  2024-11-06  6:40                                                                                           ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-11-06  6:35 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

On 2024/11/05 18:15, Christian Theune wrote:
> Hi,
> 
> 
> after about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful, otherwise I’ll keep posting that as it’s the only real data I can show)

This is bad news :(

While reviewing the related code, I came up with a plan to move the
bitmap start/end write ops to the upper layer. If each write IO from
the upper layer starts once and ends exactly once, it is easy to keep
the calls balanced, and avoiding the many per-stripe calls should
improve performance as well.

However, I need a few days to cook a patch after work.

Thanks,
Kuai



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-06  6:35                                                                                         ` Yu Kuai
@ 2024-11-06  6:40                                                                                           ` Christian Theune
  2024-11-07  7:55                                                                                             ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-06  6:40 UTC (permalink / raw)
  To: Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yukuai (C)

Hi,

> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> On 2024/11/05 18:15, Christian Theune wrote:
>> Hi,
>> after about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful, otherwise I’ll keep posting that as it’s the only real data I can show)
> 
> This is bad news :(

Yeah. But: the good news is that we aren’t eating any data so far … ;)

> While reviewing the related code, I came up with a plan to move the
> bitmap start/end write ops to the upper layer. If each write IO from
> the upper layer starts once and ends exactly once, it is easy to keep
> the calls balanced, and avoiding the many per-stripe calls should
> improve performance as well.

Sounds like a plan!

> However, I need a few days to cook a patch after work.

Sure thing! I’ll switch off bitmaps in the meantime - I’m happy we found a workaround so we can take time to resolve it cleanly. :)
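For the archives, the bitmap toggle I’m using looks roughly like this (/dev/md0 stands in for the actual array device, which isn’t named in this thread; check /proc/mdstat for the real name):

```shell
# Workaround sketch: drop the internal write-intent bitmap so the
# md_bitmap_startwrite() path is never entered. Note: without the
# bitmap, an unclean shutdown forces a full resync of the array.
mdadm --grow --bitmap=none /dev/md0

# Once a fixed kernel is running, re-add the bitmap:
mdadm --grow --bitmap=internal /dev/md0
```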

Thanks a lot for your help!
Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick



* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-06  6:40                                                                                           ` Christian Theune
@ 2024-11-07  7:55                                                                                             ` Yu Kuai
  2024-11-07  8:01                                                                                               ` Yu Kuai
  2024-11-09 11:35                                                                                               ` Xiao Ni
  0 siblings, 2 replies; 88+ messages in thread
From: Yu Kuai @ 2024-11-07  7:55 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C)

Hi!

在 2024/11/06 14:40, Christian Theune 写道:
> Hi,
> 
>> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> 在 2024/11/05 18:15, Christian Theune 写道:
>>> Hi,
>>> after about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful, otherwise I’ll keep posting that as it’s the only real data I can show)
>>
>> This is bad news :(
> 
> Yeah. But: the good new is that we aren’t eating any data so far … ;)
> 
>> While reviewing related code, I come up with a plan to move bitmap
>> start/end write ops to the upper layer. Make sure each write IO from
>> upper layer only start once and end once, this is easy to make sure
>> they are balanced and can avoid many calls to improve performance as
>> well.
> 
> Sounds like a plan!
> 
>> However, I need a few days to cooke a patch after work.
> 
> Sure thing! I’ll switch off bitmaps for that time - I’m happy we found a workaround so we can take time to resolve it cleanly. :)

I wrote a simple and crude version, please give it a test again.

Thanks,
Kuai

diff --git a/drivers/md/md.c b/drivers/md/md.c
index d3a837506a36..5e1a82b79e41 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8753,6 +8753,30 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
  }
  EXPORT_SYMBOL_GPL(md_submit_discard_bio);

+static bool is_raid456(struct mddev *mddev)
+{
+       return mddev->pers->level == 4 || mddev->pers->level == 5 ||
+              mddev->pers->level == 6;
+}
+
+static void bitmap_startwrite(struct mddev *mddev, struct bio *bio)
+{
+       if (!is_raid456(mddev) || !mddev->bitmap)
+               return;
+
+       md_bitmap_startwrite(mddev->bitmap, bio_offset(bio), bio_sectors(bio),
+                            0);
+}
+
+static void bitmap_endwrite(struct mddev *mddev, struct bio *bio, sector_t sectors)
+{
+       if (!is_raid456(mddev) || !mddev->bitmap)
+               return;
+
+       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,o
+                          bio->bi_status == BLK_STS_OK, 0);
+}
+
  static void md_end_clone_io(struct bio *bio)
  {
         struct md_io_clone *md_io_clone = bio->bi_private;
@@ -8765,6 +8789,7 @@ static void md_end_clone_io(struct bio *bio)
         if (md_io_clone->start_time)
                 bio_end_io_acct(orig_bio, md_io_clone->start_time);

+       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
         bio_put(bio);
         bio_endio(orig_bio);
         percpu_ref_put(&mddev->active_io);
@@ -8778,6 +8803,7 @@ static void md_clone_bio(struct mddev *mddev, struct bio **bio)
                 bio_alloc_clone(bdev, *bio, GFP_NOIO, &mddev->io_clone_set);

         md_io_clone = container_of(clone, struct md_io_clone, bio_clone);
+       md_io_clone->sectors = bio_sectors(*bio);
         md_io_clone->orig_bio = *bio;
         md_io_clone->mddev = mddev;
         if (blk_queue_io_stat(bdev->bd_disk->queue))
@@ -8790,6 +8816,7 @@ static void md_clone_bio(struct mddev *mddev, struct bio **bio)

  void md_account_bio(struct mddev *mddev, struct bio **bio)
  {
+       bitmap_startwrite(mddev, *bio);
         percpu_ref_get(&mddev->active_io);
         md_clone_bio(mddev, bio);
  }
@@ -8807,6 +8834,8 @@ void md_free_cloned_bio(struct bio *bio)
         if (md_io_clone->start_time)
                 bio_end_io_acct(orig_bio, md_io_clone->start_time);

+       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
+
         bio_put(bio);
         percpu_ref_put(&mddev->active_io);
  }
diff --git a/drivers/md/md.h b/drivers/md/md.h
index a0d6827dced9..0c2794230e0a 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -837,6 +837,7 @@ struct md_io_clone {
         struct mddev    *mddev;
         struct bio      *orig_bio;
         unsigned long   start_time;
+       sector_t        sectors;
         struct bio      bio_clone;
  };
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index c14cf2410365..4f009e32f68a 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3561,12 +3561,6 @@ static void __add_stripe_bio(struct stripe_head *sh, struct bio *bi,
                  * is added to a batch, STRIPE_BIT_DELAY cannot be changed
                  * any more.
                  */
-               set_bit(STRIPE_BITMAP_PENDING, &sh->state);
-               spin_unlock_irq(&sh->stripe_lock);
-               md_bitmap_startwrite(conf->mddev->bitmap, sh->sector,
-                                    RAID5_STRIPE_SECTORS(conf), 0);
-               spin_lock_irq(&sh->stripe_lock);
-               clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
                 if (!sh->batch_head) {
                         sh->bm_seq = conf->seq_flush+1;
                         set_bit(STRIPE_BIT_DELAY, &sh->state);
@@ -3621,7 +3615,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
         BUG_ON(sh->batch_head);
         for (i = disks; i--; ) {
                 struct bio *bi;
-               int bitmap_end = 0;

                 if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
                         struct md_rdev *rdev = conf->disks[i].rdev;
@@ -3646,8 +3639,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
                 sh->dev[i].towrite = NULL;
                 sh->overwrite_disks = 0;
                 spin_unlock_irq(&sh->stripe_lock);
-               if (bi)
-                       bitmap_end = 1;

                 log_stripe_write_finished(sh);
@@ -3662,10 +3653,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
                         bio_io_error(bi);
                         bi = nextbi;
                 }
-               if (bitmap_end)
-                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
-                                          RAID5_STRIPE_SECTORS(conf), 0, 0);
-               bitmap_end = 0;
                 /* and fail all 'written' */
                 bi = sh->dev[i].written;
                 sh->dev[i].written = NULL;
@@ -3674,7 +3661,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
                         sh->dev[i].page = sh->dev[i].orig_page;
                 }

-               if (bi) bitmap_end = 1;
                 while (bi && bi->bi_iter.bi_sector <
                        sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
                        struct bio *bi2 = r5_next_bio(conf, bi, sh->dev[i].sector);
@@ -3708,9 +3694,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
                                 bi = nextbi;
                         }
                 }
-               if (bitmap_end)
-                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
-                                          RAID5_STRIPE_SECTORS(conf), 0, 0);
                /* If we were in the middle of a write the parity block might
                 * still be locked - so just clear all R5_LOCKED flags
                 */
@@ -4059,10 +4042,6 @@ static void handle_stripe_clean_event(struct r5conf *conf,
                                         bio_endio(wbi);
                                         wbi = wbi2;
                                 }
-                               md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
-                                                  RAID5_STRIPE_SECTORS(conf),
-                                                  !test_bit(STRIPE_DEGRADED, &sh->state),
-                                                  0);
                                if (head_sh->batch_head) {
                                        sh = list_first_entry(&sh->batch_list,
                                                              struct stripe_head,
@@ -5788,13 +5767,6 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
                 }
                 spin_unlock_irq(&sh->stripe_lock);
                 if (conf->mddev->bitmap) {
-                       for (d = 0;
-                            d < conf->raid_disks - conf->max_degraded;
-                            d++)
-                               md_bitmap_startwrite(mddev->bitmap,
-                                                    sh->sector,
-                                                    RAID5_STRIPE_SECTORS(conf),
-                                                    0);
                         sh->bm_seq = conf->seq_flush + 1;
                         set_bit(STRIPE_BIT_DELAY, &sh->state);
                 }



> 
> Thanks a lot for your help!
> Christian
> 


^ permalink raw reply related	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-07  7:55                                                                                             ` Yu Kuai
@ 2024-11-07  8:01                                                                                               ` Yu Kuai
  2024-11-09 11:35                                                                                               ` Xiao Ni
  1 sibling, 0 replies; 88+ messages in thread
From: Yu Kuai @ 2024-11-07  8:01 UTC (permalink / raw)
  To: Yu Kuai, Christian Theune
  Cc: John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C)

Hi,

在 2024/11/07 15:55, Yu Kuai 写道:
> +       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,o

I have no idea why there is an 'o' at the end :( Please remove it while
applying the diff; I double-checked and the rest is good.

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-07  7:55                                                                                             ` Yu Kuai
  2024-11-07  8:01                                                                                               ` Yu Kuai
@ 2024-11-09 11:35                                                                                               ` Xiao Ni
  2024-11-11  2:25                                                                                                 ` Yu Kuai
  2024-11-11  8:00                                                                                                 ` Christian Theune
  1 sibling, 2 replies; 88+ messages in thread
From: Xiao Ni @ 2024-11-09 11:35 UTC (permalink / raw)
  To: Yu Kuai
  Cc: Christian Theune, John Stoffel, linux-raid@vger.kernel.org,
	dm-devel, Dragan Milivojević, yangerkun@huawei.com,
	yukuai (C), David Jeffery

[-- Attachment #1: Type: text/plain, Size: 9702 bytes --]

On Thu, Nov 7, 2024 at 3:55 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi!
>
> 在 2024/11/06 14:40, Christian Theune 写道:
> > Hi,
> >
> >> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>
> >> Hi,
> >>
> >> 在 2024/11/05 18:15, Christian Theune 写道:
> >>> Hi,
> >>> after about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful, otherwise I’ll keep posting that as it’s the only real data I can show)
> >>
> >> This is bad news :(
> >
> > Yeah. But: the good new is that we aren’t eating any data so far … ;)
> >
> >> While reviewing related code, I come up with a plan to move bitmap
> >> start/end write ops to the upper layer. Make sure each write IO from
> >> upper layer only start once and end once, this is easy to make sure
> >> they are balanced and can avoid many calls to improve performance as
> >> well.
> >
> > Sounds like a plan!
> >
> >> However, I need a few days to cooke a patch after work.
> >
> > Sure thing! I’ll switch off bitmaps for that time - I’m happy we found a workaround so we can take time to resolve it cleanly. :)
>
> I wrote a simple and crude version, please give it a test again.
>
> Thanks,
> Kuai
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index d3a837506a36..5e1a82b79e41 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -8753,6 +8753,30 @@ void md_submit_discard_bio(struct mddev *mddev,
> struct md_rdev *rdev,
>   }
>   EXPORT_SYMBOL_GPL(md_submit_discard_bio);
>
> +static bool is_raid456(struct mddev *mddev)
> +{
> +       return mddev->pers->level == 4 || mddev->pers->level == 5 ||
> +              mddev->pers->level == 6;
> +}
> +
> +static void bitmap_startwrite(struct mddev *mddev, struct bio *bio)
> +{
> +       if (!is_raid456(mddev) || !mddev->bitmap)
> +               return;
> +
> +       md_bitmap_startwrite(mddev->bitmap, bio_offset(bio),
> bio_sectors(bio),
> +                            0);
> +}
> +
> +static void bitmap_endwrite(struct mddev *mddev, struct bio *bio,
> sector_t sectors)
> +{
> +       if (!is_raid456(mddev) || !mddev->bitmap)
> +               return;
> +
> +       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,o
> +                          bio->bi_status == BLK_STS_OK, 0);
> +}
> +
>   static void md_end_clone_io(struct bio *bio)
>   {
>          struct md_io_clone *md_io_clone = bio->bi_private;
> @@ -8765,6 +8789,7 @@ static void md_end_clone_io(struct bio *bio)
>          if (md_io_clone->start_time)
>                  bio_end_io_acct(orig_bio, md_io_clone->start_time);
>
> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>          bio_put(bio);
>          bio_endio(orig_bio);
>          percpu_ref_put(&mddev->active_io);
> @@ -8778,6 +8803,7 @@ static void md_clone_bio(struct mddev *mddev,
> struct bio **bio)
>                  bio_alloc_clone(bdev, *bio, GFP_NOIO,
> &mddev->io_clone_set);
>
>          md_io_clone = container_of(clone, struct md_io_clone, bio_clone);
> +       md_io_clone->sectors = bio_sectors(*bio);
>          md_io_clone->orig_bio = *bio;
>          md_io_clone->mddev = mddev;
>          if (blk_queue_io_stat(bdev->bd_disk->queue))
> @@ -8790,6 +8816,7 @@ static void md_clone_bio(struct mddev *mddev,
> struct bio **bio)
>
>   void md_account_bio(struct mddev *mddev, struct bio **bio)
>   {
> +       bitmap_startwrite(mddev, *bio);
>          percpu_ref_get(&mddev->active_io);
>          md_clone_bio(mddev, bio);
>   }
> @@ -8807,6 +8834,8 @@ void md_free_cloned_bio(struct bio *bio)
>          if (md_io_clone->start_time)
>                  bio_end_io_acct(orig_bio, md_io_clone->start_time);
>
> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
> +
>          bio_put(bio);
>          percpu_ref_put(&mddev->active_io);
>   }
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index a0d6827dced9..0c2794230e0a 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -837,6 +837,7 @@ struct md_io_clone {
>          struct mddev    *mddev;
>          struct bio      *orig_bio;
>          unsigned long   start_time;
> +       sector_t        sectors;
>          struct bio      bio_clone;
>   };
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index c14cf2410365..4f009e32f68a 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -3561,12 +3561,6 @@ static void __add_stripe_bio(struct stripe_head
> *sh, struct bio *bi,
>                   * is added to a batch, STRIPE_BIT_DELAY cannot be changed
>                   * any more.
>                   */
> -               set_bit(STRIPE_BITMAP_PENDING, &sh->state);
> -               spin_unlock_irq(&sh->stripe_lock);
> -               md_bitmap_startwrite(conf->mddev->bitmap, sh->sector,
> -                                    RAID5_STRIPE_SECTORS(conf), 0);
> -               spin_lock_irq(&sh->stripe_lock);
> -               clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
>                  if (!sh->batch_head) {
>                          sh->bm_seq = conf->seq_flush+1;
>                          set_bit(STRIPE_BIT_DELAY, &sh->state);
> @@ -3621,7 +3615,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> stripe_head *sh,
>          BUG_ON(sh->batch_head);
>          for (i = disks; i--; ) {
>                  struct bio *bi;
> -               int bitmap_end = 0;
>
>                  if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
>                          struct md_rdev *rdev = conf->disks[i].rdev;
> @@ -3646,8 +3639,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> stripe_head *sh,
>                  sh->dev[i].towrite = NULL;
>                  sh->overwrite_disks = 0;
>                  spin_unlock_irq(&sh->stripe_lock);
> -               if (bi)
> -                       bitmap_end = 1;
>
>                  log_stripe_write_finished(sh);
> @@ -3662,10 +3653,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> stripe_head *sh,
>                          bio_io_error(bi);
>                          bi = nextbi;
>                  }
> -               if (bitmap_end)
> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
> -                                          RAID5_STRIPE_SECTORS(conf),
> 0, 0);
> -               bitmap_end = 0;
>                  /* and fail all 'written' */
>                  bi = sh->dev[i].written;
>                  sh->dev[i].written = NULL;
> @@ -3674,7 +3661,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> stripe_head *sh,
>                          sh->dev[i].page = sh->dev[i].orig_page;
>                  }
>
> -               if (bi) bitmap_end = 1;
>                  while (bi && bi->bi_iter.bi_sector <
>                         sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
>                          struct bio *bi2 = r5_next_bio(conf, bi,
> sh->dev[i].sector);
> @@ -3708,9 +3694,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> stripe_head *sh,
>                                  bi = nextbi;
>                          }
>                  }
> -               if (bitmap_end)
> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
> -                                          RAID5_STRIPE_SECTORS(conf),
> 0, 0);
>                  /* If we were in the middle of a write the parity block
> might
>                   * still be locked - so just clear all R5_LOCKED flags
>                   */
> @@ -4059,10 +4042,6 @@ static void handle_stripe_clean_event(struct
> r5conf *conf,
>                                          bio_endio(wbi);
>                                          wbi = wbi2;
>                                  }
> -                               md_bitmap_endwrite(conf->mddev->bitmap,
> sh->sector,
> -
> RAID5_STRIPE_SECTORS(conf),
> -
> !test_bit(STRIPE_DEGRADED, &sh->state),
> -                                                  0);
>                                  if (head_sh->batch_head) {
>                                          sh =
> list_first_entry(&sh->batch_list,
>                                                                struct
> stripe_head,
> @@ -5788,13 +5767,6 @@ static void make_discard_request(struct mddev
> *mddev, struct bio *bi)
>                  }
>                  spin_unlock_irq(&sh->stripe_lock);
>                  if (conf->mddev->bitmap) {
> -                       for (d = 0;
> -                            d < conf->raid_disks - conf->max_degraded;
> -                            d++)
> -                               md_bitmap_startwrite(mddev->bitmap,
> -                                                    sh->sector,
> -
> RAID5_STRIPE_SECTORS(conf),
> -                                                    0);
>                          sh->bm_seq = conf->seq_flush + 1;
>                          set_bit(STRIPE_BIT_DELAY, &sh->state);
>                  }
>
>
>
> >
> > Thanks a lot for your help!
> > Christian
> >
>
>

Hi Kuai

Maybe it's not good to move the bitmap operations from raid5 into md
when the new API is only used by raid5. Also, the bitmap region that
raid5 needs to handle is based on the member disk, so it should be
calculated from the stripe rather than taken from the bio address
space, because the bio address space covers the whole array.

We have a customer who reported a similar problem, and there is a patch
from David; I've put it in the attachment.

@Christian, can you give the patch a try? It applies cleanly on
6.11-rc6.

Regards
Xiao

[-- Attachment #2: md_raid5_one_bitmap_claim_per_stripe_head.patch --]
[-- Type: application/octet-stream, Size: 4330 bytes --]

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index c14cf2410365..6e318598a7b6 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3548,7 +3548,7 @@ static void __add_stripe_bio(struct stripe_head *sh, struct bio *bi,
 		 (*bip)->bi_iter.bi_sector, sh->sector, dd_idx,
 		 sh->dev[dd_idx].sector);
 
-	if (conf->mddev->bitmap && firstwrite) {
+	if (conf->mddev->bitmap && firstwrite && !test_and_set_bit(STRIPE_BITMAP_CLAIM, &sh->state)) {
 		/* Cannot hold spinlock over bitmap_startwrite,
 		 * but must ensure this isn't added to a batch until
 		 * we have added to the bitmap and set bm_seq.
@@ -3621,7 +3621,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
 	BUG_ON(sh->batch_head);
 	for (i = disks; i--; ) {
 		struct bio *bi;
-		int bitmap_end = 0;
 
 		if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
 			struct md_rdev *rdev = conf->disks[i].rdev;
@@ -3646,8 +3645,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
 		sh->dev[i].towrite = NULL;
 		sh->overwrite_disks = 0;
 		spin_unlock_irq(&sh->stripe_lock);
-		if (bi)
-			bitmap_end = 1;
 
 		log_stripe_write_finished(sh);
 
@@ -3662,10 +3659,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
 			bio_io_error(bi);
 			bi = nextbi;
 		}
-		if (bitmap_end)
-			md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
-					   RAID5_STRIPE_SECTORS(conf), 0, 0);
-		bitmap_end = 0;
 		/* and fail all 'written' */
 		bi = sh->dev[i].written;
 		sh->dev[i].written = NULL;
@@ -3674,7 +3667,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
 			sh->dev[i].page = sh->dev[i].orig_page;
 		}
 
-		if (bi) bitmap_end = 1;
 		while (bi && bi->bi_iter.bi_sector <
 		       sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
 			struct bio *bi2 = r5_next_bio(conf, bi, sh->dev[i].sector);
@@ -3708,14 +3700,15 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
 				bi = nextbi;
 			}
 		}
-		if (bitmap_end)
-			md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
-					   RAID5_STRIPE_SECTORS(conf), 0, 0);
 		/* If we were in the middle of a write the parity block might
 		 * still be locked - so just clear all R5_LOCKED flags
 		 */
 		clear_bit(R5_LOCKED, &sh->dev[i].flags);
 	}
+	if (test_and_clear_bit(STRIPE_BITMAP_CLAIM, &sh->state)) {
+		md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
+				   RAID5_STRIPE_SECTORS(conf), 0, 0);
+	}
 	s->to_write = 0;
 	s->written = 0;
 
@@ -4059,10 +4052,6 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 					bio_endio(wbi);
 					wbi = wbi2;
 				}
-				md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
-						   RAID5_STRIPE_SECTORS(conf),
-						   !test_bit(STRIPE_DEGRADED, &sh->state),
-						   0);
 				if (head_sh->batch_head) {
 					sh = list_first_entry(&sh->batch_list,
 							      struct stripe_head,
@@ -4077,7 +4066,11 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 			} else if (test_bit(R5_Discard, &dev->flags))
 				discard_pending = 1;
 		}
-
+	if (test_and_clear_bit(STRIPE_BITMAP_CLAIM, &sh->state)) {
+		md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
+				   RAID5_STRIPE_SECTORS(conf),
+				   !test_bit(STRIPE_DEGRADED, &sh->state), 0);
+	}
 	log_stripe_write_finished(sh);
 
 	if (!discard_pending &&
@@ -5788,13 +5781,11 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
 		}
 		spin_unlock_irq(&sh->stripe_lock);
 		if (conf->mddev->bitmap) {
-			for (d = 0;
-			     d < conf->raid_disks - conf->max_degraded;
-			     d++)
-				md_bitmap_startwrite(mddev->bitmap,
-						     sh->sector,
-						     RAID5_STRIPE_SECTORS(conf),
-						     0);
+			set_bit(STRIPE_BITMAP_CLAIM, &sh->state);
+			md_bitmap_startwrite(mddev->bitmap,
+					     sh->sector,
+					     RAID5_STRIPE_SECTORS(conf),
+					     0);
 			sh->bm_seq = conf->seq_flush + 1;
 			set_bit(STRIPE_BIT_DELAY, &sh->state);
 		}
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 9b5a7dc3f2a0..830d5b33a4fc 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -399,6 +399,9 @@ enum {
 				 * in conf->r5c_full_stripe_list)
 				 */
 	STRIPE_R5C_PREFLUSH,	/* need to flush journal device */
+	STRIPE_BITMAP_CLAIM,	/* Has a claim on the bitmap which will need to
+				 * be released
+				 */
 };
 
 #define STRIPE_EXPAND_SYNC_FLAGS \

^ permalink raw reply related	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-09 11:35                                                                                               ` Xiao Ni
@ 2024-11-11  2:25                                                                                                 ` Yu Kuai
  2024-11-11  8:00                                                                                                 ` Christian Theune
  1 sibling, 0 replies; 88+ messages in thread
From: Yu Kuai @ 2024-11-11  2:25 UTC (permalink / raw)
  To: Xiao Ni, Yu Kuai
  Cc: Christian Theune, John Stoffel, linux-raid@vger.kernel.org,
	dm-devel, Dragan Milivojević, yangerkun@huawei.com,
	David Jeffery, yukuai (C)

Hi,

在 2024/11/09 19:35, Xiao Ni 写道:
> Maybe it's not good to put the bitmap operation from raid5 to md which
> the new api is only used for raid5. And the bitmap region which raid5
> needs to handle is based on the member disk. It should be calculated
> rather than the bio address space. Because the bio address space is
> for the whole array.

Yes, this is just a patch to test whether the raid5 bitmap accounting
can be balanced, not a formal version. raid1/raid10 can be moved to md.c
much more easily, and I'm still trying to figure out why raid5 is special
and doesn't use the array address space to index the bitmap. If it is
possible to use the array address space, I'd prefer that, so that bitmap
operations can be consistent across all levels.

> 
> We have a customer who reports a similar problem. There is a patch
> from David. I put it in the attachment.

This patch looks good in general and is better for backporting.

Thanks,
Kuai

> 
> @Christian, can you have a try with the patch? It can be applied
> cleanly on 6.11-rc6


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-09 11:35                                                                                               ` Xiao Ni
  2024-11-11  2:25                                                                                                 ` Yu Kuai
@ 2024-11-11  8:00                                                                                                 ` Christian Theune
  2024-11-11 14:34                                                                                                   ` Christian Theune
  1 sibling, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-11  8:00 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

Hi,

I’m trying this with 6.11.7 today.

Christian

> On 9. Nov 2024, at 12:35, Xiao Ni <xni@redhat.com> wrote:
> 
> On Thu, Nov 7, 2024 at 3:55 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>> 
>> Hi!
>> 
>> 在 2024/11/06 14:40, Christian Theune 写道:
>>> Hi,
>>> 
>>>> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> 在 2024/11/05 18:15, Christian Theune 写道:
>>>>> Hi,
>>>>> after about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful, otherwise I’ll keep posting that as it’s the only real data I can show)
>>>> 
>>>> This is bad news :(
>>> 
>>> Yeah. But: the good new is that we aren’t eating any data so far … ;)
>>> 
>>>> While reviewing related code, I come up with a plan to move bitmap
>>>> start/end write ops to the upper layer. Make sure each write IO from
>>>> upper layer only start once and end once, this is easy to make sure
>>>> they are balanced and can avoid many calls to improve performance as
>>>> well.
>>> 
>>> Sounds like a plan!
>>> 
>>>> However, I need a few days to cooke a patch after work.
>>> 
>>> Sure thing! I’ll switch off bitmaps for that time - I’m happy we found a workaround so we can take time to resolve it cleanly. :)
>> 
>> I wrote a simple and crude version, please give it a test again.
>> 
>> Thanks,
>> Kuai
>> 
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index d3a837506a36..5e1a82b79e41 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -8753,6 +8753,30 @@ void md_submit_discard_bio(struct mddev *mddev,
>> struct md_rdev *rdev,
>> }
>> EXPORT_SYMBOL_GPL(md_submit_discard_bio);
>> 
>> +static bool is_raid456(struct mddev *mddev)
>> +{
>> +       return mddev->pers->level == 4 || mddev->pers->level == 5 ||
>> +              mddev->pers->level == 6;
>> +}
>> +
>> +static void bitmap_startwrite(struct mddev *mddev, struct bio *bio)
>> +{
>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>> +               return;
>> +
>> +       md_bitmap_startwrite(mddev->bitmap, bio_offset(bio),
>> bio_sectors(bio),
>> +                            0);
>> +}
>> +
>> +static void bitmap_endwrite(struct mddev *mddev, struct bio *bio,
>> sector_t sectors)
>> +{
>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>> +               return;
>> +
>> +       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,o
>> +                          bio->bi_status == BLK_STS_OK, 0);
>> +}
>> +
>> static void md_end_clone_io(struct bio *bio)
>> {
>>        struct md_io_clone *md_io_clone = bio->bi_private;
>> @@ -8765,6 +8789,7 @@ static void md_end_clone_io(struct bio *bio)
>>        if (md_io_clone->start_time)
>>                bio_end_io_acct(orig_bio, md_io_clone->start_time);
>> 
>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>>        bio_put(bio);
>>        bio_endio(orig_bio);
>>        percpu_ref_put(&mddev->active_io);
>> @@ -8778,6 +8803,7 @@ static void md_clone_bio(struct mddev *mddev,
>> struct bio **bio)
>>                bio_alloc_clone(bdev, *bio, GFP_NOIO,
>> &mddev->io_clone_set);
>> 
>>        md_io_clone = container_of(clone, struct md_io_clone, bio_clone);
>> +       md_io_clone->sectors = bio_sectors(*bio);
>>        md_io_clone->orig_bio = *bio;
>>        md_io_clone->mddev = mddev;
>>        if (blk_queue_io_stat(bdev->bd_disk->queue))
>> @@ -8790,6 +8816,7 @@ static void md_clone_bio(struct mddev *mddev,
>> struct bio **bio)
>> 
>> void md_account_bio(struct mddev *mddev, struct bio **bio)
>> {
>> +       bitmap_startwrite(mddev, *bio);
>>        percpu_ref_get(&mddev->active_io);
>>        md_clone_bio(mddev, bio);
>> }
>> @@ -8807,6 +8834,8 @@ void md_free_cloned_bio(struct bio *bio)
>>        if (md_io_clone->start_time)
>>                bio_end_io_acct(orig_bio, md_io_clone->start_time);
>> 
>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>> +
>>        bio_put(bio);
>>        percpu_ref_put(&mddev->active_io);
>> }
>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>> index a0d6827dced9..0c2794230e0a 100644
>> --- a/drivers/md/md.h
>> +++ b/drivers/md/md.h
>> @@ -837,6 +837,7 @@ struct md_io_clone {
>>        struct mddev    *mddev;
>>        struct bio      *orig_bio;
>>        unsigned long   start_time;
>> +       sector_t        sectors;
>>        struct bio      bio_clone;
>> };
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index c14cf2410365..4f009e32f68a 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -3561,12 +3561,6 @@ static void __add_stripe_bio(struct stripe_head
>> *sh, struct bio *bi,
>>                 * is added to a batch, STRIPE_BIT_DELAY cannot be changed
>>                 * any more.
>>                 */
>> -               set_bit(STRIPE_BITMAP_PENDING, &sh->state);
>> -               spin_unlock_irq(&sh->stripe_lock);
>> -               md_bitmap_startwrite(conf->mddev->bitmap, sh->sector,
>> -                                    RAID5_STRIPE_SECTORS(conf), 0);
>> -               spin_lock_irq(&sh->stripe_lock);
>> -               clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
>>                if (!sh->batch_head) {
>>                        sh->bm_seq = conf->seq_flush+1;
>>                        set_bit(STRIPE_BIT_DELAY, &sh->state);
>> @@ -3621,7 +3615,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>> stripe_head *sh,
>>        BUG_ON(sh->batch_head);
>>        for (i = disks; i--; ) {
>>                struct bio *bi;
>> -               int bitmap_end = 0;
>> 
>>                if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
>>                        struct md_rdev *rdev = conf->disks[i].rdev;
>> @@ -3646,8 +3639,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>> stripe_head *sh,
>>                sh->dev[i].towrite = NULL;
>>                sh->overwrite_disks = 0;
>>                spin_unlock_irq(&sh->stripe_lock);
>> -               if (bi)
>> -                       bitmap_end = 1;
>> 
>>                log_stripe_write_finished(sh);
>> @@ -3662,10 +3653,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>> stripe_head *sh,
>>                        bio_io_error(bi);
>>                        bi = nextbi;
>>                }
>> -               if (bitmap_end)
>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>> -                                          RAID5_STRIPE_SECTORS(conf),
>> 0, 0);
>> -               bitmap_end = 0;
>>                /* and fail all 'written' */
>>                bi = sh->dev[i].written;
>>                sh->dev[i].written = NULL;
>> @@ -3674,7 +3661,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>> stripe_head *sh,
>>                        sh->dev[i].page = sh->dev[i].orig_page;
>>                }
>> 
>> -               if (bi) bitmap_end = 1;
>>                while (bi && bi->bi_iter.bi_sector <
>>                       sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
>>                        struct bio *bi2 = r5_next_bio(conf, bi,
>> sh->dev[i].sector);
>> @@ -3708,9 +3694,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>> stripe_head *sh,
>>                                bi = nextbi;
>>                        }
>>                }
>> -               if (bitmap_end)
>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>> -                                          RAID5_STRIPE_SECTORS(conf),
>> 0, 0);
>>                /* If we were in the middle of a write the parity block
>> might
>>                 * still be locked - so just clear all R5_LOCKED flags
>>                 */
>> @@ -4059,10 +4042,6 @@ static void handle_stripe_clean_event(struct
>> r5conf *conf,
>>                                        bio_endio(wbi);
>>                                        wbi = wbi2;
>>                                }
>> -                               md_bitmap_endwrite(conf->mddev->bitmap,
>> sh->sector,
>> -
>> RAID5_STRIPE_SECTORS(conf),
>> -
>> !test_bit(STRIPE_DEGRADED, &sh->state),
>> -                                                  0);
>>                                if (head_sh->batch_head) {
>>                                        sh =
>> list_first_entry(&sh->batch_list,
>>                                                              struct
>> stripe_head,
>> @@ -5788,13 +5767,6 @@ static void make_discard_request(struct mddev
>> *mddev, struct bio *bi)
>>                }
>>                spin_unlock_irq(&sh->stripe_lock);
>>                if (conf->mddev->bitmap) {
>> -                       for (d = 0;
>> -                            d < conf->raid_disks - conf->max_degraded;
>> -                            d++)
>> -                               md_bitmap_startwrite(mddev->bitmap,
>> -                                                    sh->sector,
>> -
>> RAID5_STRIPE_SECTORS(conf),
>> -                                                    0);
>>                        sh->bm_seq = conf->seq_flush + 1;
>>                        set_bit(STRIPE_BIT_DELAY, &sh->state);
>>                }
>> 
>> 
>> 
>>> 
>>> Thanks a lot for your help!
>>> Christian
>>> 
>> 
>> 
> 
> Hi Kuai
> 
> Maybe it's not good to move the bitmap operations from raid5 into md
> when the new API is only used by raid5. Also, the bitmap region that
> raid5 needs to handle is based on the member disk, so it should be
> calculated from the member-disk layout rather than from the bio
> address space, because the bio address space covers the whole array.
> 
> We have a customer who reported a similar problem. There is a patch
> from David; I have put it in the attachment.
> 
> @Christian, can you give the patch a try? It applies cleanly on
> 6.11-rc6.
> 
> Regards
> Xiao
> <md_raid5_one_bitmap_claim_per_stripe_head.patch>

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-11  8:00                                                                                                 ` Christian Theune
@ 2024-11-11 14:34                                                                                                   ` Christian Theune
  2024-11-12  6:57                                                                                                     ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-11 14:34 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

I’ve been running with my workflow that is known to trigger the issue reliably for about 6 hours now. This is longer than it lasted before.
I’m leaving the office for today and will leave things running over night and report back tomorrow.

Christian

> On 11. Nov 2024, at 09:00, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
> I’m trying this with 6.11.7 today.
> 
> Christian
> 
>> On 9. Nov 2024, at 12:35, Xiao Ni <xni@redhat.com> wrote:
>> 
>> On Thu, Nov 7, 2024 at 3:55 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>> 
>>> Hi!
>>> 
>>> On 2024/11/06 14:40, Christian Theune wrote:
>>>> Hi,
>>>> 
>>>>> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> On 2024/11/05 18:15, Christian Theune wrote:
>>>>>> Hi,
>>>>>> After about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful; otherwise I’ll keep posting it, as it’s the only real data I can show.)
>>>>> 
>>>>> This is bad news :(
>>>> 
>>>> Yeah. But: the good news is that we aren’t eating any data so far … ;)
>>>> 
>>>>> While reviewing related code, I came up with a plan to move the
>>>>> bitmap start/end write ops to the upper layer. If each write IO
>>>>> from the upper layer starts exactly once and ends exactly once,
>>>>> it is easy to ensure the calls are balanced, and the reduced
>>>>> number of calls should improve performance as well.
>>>> 
>>>> Sounds like a plan!
>>>> 
>>>>> However, I need a few days to cook a patch after work.
>>>> 
>>>> Sure thing! I’ll switch off bitmaps for that time - I’m happy we found a workaround so we can take time to resolve it cleanly. :)
>>> 
>>> I wrote a simple and crude version; please give it a test again.
>>> 
>>> Thanks,
>>> Kuai
>>> 
>>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>>> index d3a837506a36..5e1a82b79e41 100644
>>> --- a/drivers/md/md.c
>>> +++ b/drivers/md/md.c
>>> @@ -8753,6 +8753,30 @@ void md_submit_discard_bio(struct mddev *mddev,
>>> struct md_rdev *rdev,
>>> }
>>> EXPORT_SYMBOL_GPL(md_submit_discard_bio);
>>> 
>>> +static bool is_raid456(struct mddev *mddev)
>>> +{
>>> +       return mddev->pers->level == 4 || mddev->pers->level == 5 ||
>>> +              mddev->pers->level == 6;
>>> +}
>>> +
>>> +static void bitmap_startwrite(struct mddev *mddev, struct bio *bio)
>>> +{
>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>>> +               return;
>>> +
>>> +       md_bitmap_startwrite(mddev->bitmap, bio_offset(bio),
>>> bio_sectors(bio),
>>> +                            0);
>>> +}
>>> +
>>> +static void bitmap_endwrite(struct mddev *mddev, struct bio *bio,
>>> sector_t sectors)
>>> +{
>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>>> +               return;
>>> +
>>> +       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,
>>> +                          bio->bi_status == BLK_STS_OK, 0);
>>> +}
>>> +
>>> static void md_end_clone_io(struct bio *bio)
>>> {
>>>       struct md_io_clone *md_io_clone = bio->bi_private;
>>> @@ -8765,6 +8789,7 @@ static void md_end_clone_io(struct bio *bio)
>>>       if (md_io_clone->start_time)
>>>               bio_end_io_acct(orig_bio, md_io_clone->start_time);
>>> 
>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>>>       bio_put(bio);
>>>       bio_endio(orig_bio);
>>>       percpu_ref_put(&mddev->active_io);
>>> @@ -8778,6 +8803,7 @@ static void md_clone_bio(struct mddev *mddev,
>>> struct bio **bio)
>>>               bio_alloc_clone(bdev, *bio, GFP_NOIO,
>>> &mddev->io_clone_set);
>>> 
>>>       md_io_clone = container_of(clone, struct md_io_clone, bio_clone);
>>> +       md_io_clone->sectors = bio_sectors(*bio);
>>>       md_io_clone->orig_bio = *bio;
>>>       md_io_clone->mddev = mddev;
>>>       if (blk_queue_io_stat(bdev->bd_disk->queue))
>>> @@ -8790,6 +8816,7 @@ static void md_clone_bio(struct mddev *mddev,
>>> struct bio **bio)
>>> 
>>> void md_account_bio(struct mddev *mddev, struct bio **bio)
>>> {
>>> +       bitmap_startwrite(mddev, *bio);
>>>       percpu_ref_get(&mddev->active_io);
>>>       md_clone_bio(mddev, bio);
>>> }
>>> @@ -8807,6 +8834,8 @@ void md_free_cloned_bio(struct bio *bio)
>>>       if (md_io_clone->start_time)
>>>               bio_end_io_acct(orig_bio, md_io_clone->start_time);
>>> 
>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>>> +
>>>       bio_put(bio);
>>>       percpu_ref_put(&mddev->active_io);
>>> }
>>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>>> index a0d6827dced9..0c2794230e0a 100644
>>> --- a/drivers/md/md.h
>>> +++ b/drivers/md/md.h
>>> @@ -837,6 +837,7 @@ struct md_io_clone {
>>>       struct mddev    *mddev;
>>>       struct bio      *orig_bio;
>>>       unsigned long   start_time;
>>> +       sector_t        sectors;
>>>       struct bio      bio_clone;
>>> };
>>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>>> index c14cf2410365..4f009e32f68a 100644
>>> --- a/drivers/md/raid5.c
>>> +++ b/drivers/md/raid5.c
>>> @@ -3561,12 +3561,6 @@ static void __add_stripe_bio(struct stripe_head
>>> *sh, struct bio *bi,
>>>                * is added to a batch, STRIPE_BIT_DELAY cannot be changed
>>>                * any more.
>>>                */
>>> -               set_bit(STRIPE_BITMAP_PENDING, &sh->state);
>>> -               spin_unlock_irq(&sh->stripe_lock);
>>> -               md_bitmap_startwrite(conf->mddev->bitmap, sh->sector,
>>> -                                    RAID5_STRIPE_SECTORS(conf), 0);
>>> -               spin_lock_irq(&sh->stripe_lock);
>>> -               clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
>>>               if (!sh->batch_head) {
>>>                       sh->bm_seq = conf->seq_flush+1;
>>>                       set_bit(STRIPE_BIT_DELAY, &sh->state);
>>> @@ -3621,7 +3615,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>> stripe_head *sh,
>>>       BUG_ON(sh->batch_head);
>>>       for (i = disks; i--; ) {
>>>               struct bio *bi;
>>> -               int bitmap_end = 0;
>>> 
>>>               if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
>>>                       struct md_rdev *rdev = conf->disks[i].rdev;
>>> @@ -3646,8 +3639,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>> stripe_head *sh,
>>>               sh->dev[i].towrite = NULL;
>>>               sh->overwrite_disks = 0;
>>>               spin_unlock_irq(&sh->stripe_lock);
>>> -               if (bi)
>>> -                       bitmap_end = 1;
>>> 
>>>               log_stripe_write_finished(sh);
>>> @@ -3662,10 +3653,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>> stripe_head *sh,
>>>                       bio_io_error(bi);
>>>                       bi = nextbi;
>>>               }
>>> -               if (bitmap_end)
>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>>> -                                          RAID5_STRIPE_SECTORS(conf),
>>> 0, 0);
>>> -               bitmap_end = 0;
>>>               /* and fail all 'written' */
>>>               bi = sh->dev[i].written;
>>>               sh->dev[i].written = NULL;
>>> @@ -3674,7 +3661,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>> stripe_head *sh,
>>>                       sh->dev[i].page = sh->dev[i].orig_page;
>>>               }
>>> 
>>> -               if (bi) bitmap_end = 1;
>>>               while (bi && bi->bi_iter.bi_sector <
>>>                      sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
>>>                       struct bio *bi2 = r5_next_bio(conf, bi,
>>> sh->dev[i].sector);
>>> @@ -3708,9 +3694,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>> stripe_head *sh,
>>>                               bi = nextbi;
>>>                       }
>>>               }
>>> -               if (bitmap_end)
>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>>> -                                          RAID5_STRIPE_SECTORS(conf),
>>> 0, 0);
>>>               /* If we were in the middle of a write the parity block
>>> might
>>>                * still be locked - so just clear all R5_LOCKED flags
>>>                */
>>> @@ -4059,10 +4042,6 @@ static void handle_stripe_clean_event(struct
>>> r5conf *conf,
>>>                                       bio_endio(wbi);
>>>                                       wbi = wbi2;
>>>                               }
>>> -                               md_bitmap_endwrite(conf->mddev->bitmap,
>>> sh->sector,
>>> -
>>> RAID5_STRIPE_SECTORS(conf),
>>> -
>>> !test_bit(STRIPE_DEGRADED, &sh->state),
>>> -                                                  0);
>>>                               if (head_sh->batch_head) {
>>>                                       sh =
>>> list_first_entry(&sh->batch_list,
>>>                                                             struct
>>> stripe_head,
>>> @@ -5788,13 +5767,6 @@ static void make_discard_request(struct mddev
>>> *mddev, struct bio *bi)
>>>               }
>>>               spin_unlock_irq(&sh->stripe_lock);
>>>               if (conf->mddev->bitmap) {
>>> -                       for (d = 0;
>>> -                            d < conf->raid_disks - conf->max_degraded;
>>> -                            d++)
>>> -                               md_bitmap_startwrite(mddev->bitmap,
>>> -                                                    sh->sector,
>>> -
>>> RAID5_STRIPE_SECTORS(conf),
>>> -                                                    0);
>>>                       sh->bm_seq = conf->seq_flush + 1;
>>>                       set_bit(STRIPE_BIT_DELAY, &sh->state);
>>>               }
>>> 
>>> 
>>> 
>>>> 
>>>> Thanks a lot for your help!
>>>> Christian
>>>> 
>>> 
>>> 
>> 
>> Hi Kuai
>> 
>> Maybe it's not good to move the bitmap operations from raid5 into md
>> when the new API is only used by raid5. Also, the bitmap region that
>> raid5 needs to handle is based on the member disk, so it should be
>> calculated from the member-disk layout rather than from the bio
>> address space, because the bio address space covers the whole array.
>> 
>> We have a customer who reported a similar problem. There is a patch
>> from David; I have put it in the attachment.
>> 
>> @Christian, can you give the patch a try? It applies cleanly on
>> 6.11-rc6.
>> 
>> Regards
>> Xiao
>> <md_raid5_one_bitmap_claim_per_stripe_head.patch>
> 
> Kind regards,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-11 14:34                                                                                                   ` Christian Theune
@ 2024-11-12  6:57                                                                                                     ` Christian Theune
  2024-11-14 15:07                                                                                                       ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-12  6:57 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

Hi,

My workload has now been running successfully for 22 hours; it seems that the patch works.

If this gets accepted then I’d kindly ask for an LTS backport to 6.6.

Thanks to everyone for helping figure this out and fix it!

Christian

> On 11. Nov 2024, at 15:34, Christian Theune <ct@flyingcircus.io> wrote:
> 
> I’ve been running with my workflow that is known to trigger the issue reliably for about 6 hours now. This is longer than it lasted before.
> I’m leaving the office for today and will leave things running over night and report back tomorrow.
> 
> Christian
> 
>> On 11. Nov 2024, at 09:00, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Hi,
>> 
>> I’m trying this with 6.11.7 today.
>> 
>> Christian
>> 
>>> On 9. Nov 2024, at 12:35, Xiao Ni <xni@redhat.com> wrote:
>>> 
>>> On Thu, Nov 7, 2024 at 3:55 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>> 
>>>> Hi!
>>>> 
>>>>> On 2024/11/06 14:40, Christian Theune wrote:
>>>>> Hi,
>>>>> 
>>>>>> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>> On 2024/11/05 18:15, Christian Theune wrote:
>>>>>>> Hi,
>>>>>>> After about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful; otherwise I’ll keep posting it, as it’s the only real data I can show.)
>>>>>> 
>>>>>> This is bad news :(
>>>>> 
>>>>> Yeah. But: the good news is that we aren’t eating any data so far … ;)
>>>>> 
>>>>>> While reviewing related code, I came up with a plan to move the
>>>>>> bitmap start/end write ops to the upper layer. If each write IO
>>>>>> from the upper layer starts exactly once and ends exactly once,
>>>>>> it is easy to ensure the calls are balanced, and the reduced
>>>>>> number of calls should improve performance as well.
>>>>> 
>>>>> Sounds like a plan!
>>>>> 
>>>>>> However, I need a few days to cook a patch after work.
>>>>> 
>>>>> Sure thing! I’ll switch off bitmaps for that time - I’m happy we found a workaround so we can take time to resolve it cleanly. :)
>>>> 
>>>> I wrote a simple and crude version; please give it a test again.
>>>> 
>>>> Thanks,
>>>> Kuai
>>>> 
>>>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>>>> index d3a837506a36..5e1a82b79e41 100644
>>>> --- a/drivers/md/md.c
>>>> +++ b/drivers/md/md.c
>>>> @@ -8753,6 +8753,30 @@ void md_submit_discard_bio(struct mddev *mddev,
>>>> struct md_rdev *rdev,
>>>> }
>>>> EXPORT_SYMBOL_GPL(md_submit_discard_bio);
>>>> 
>>>> +static bool is_raid456(struct mddev *mddev)
>>>> +{
>>>> +       return mddev->pers->level == 4 || mddev->pers->level == 5 ||
>>>> +              mddev->pers->level == 6;
>>>> +}
>>>> +
>>>> +static void bitmap_startwrite(struct mddev *mddev, struct bio *bio)
>>>> +{
>>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>>>> +               return;
>>>> +
>>>> +       md_bitmap_startwrite(mddev->bitmap, bio_offset(bio),
>>>> bio_sectors(bio),
>>>> +                            0);
>>>> +}
>>>> +
>>>> +static void bitmap_endwrite(struct mddev *mddev, struct bio *bio,
>>>> sector_t sectors)
>>>> +{
>>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>>>> +               return;
>>>> +
>>>> +       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,
>>>> +                          bio->bi_status == BLK_STS_OK, 0);
>>>> +}
>>>> +
>>>> static void md_end_clone_io(struct bio *bio)
>>>> {
>>>>      struct md_io_clone *md_io_clone = bio->bi_private;
>>>> @@ -8765,6 +8789,7 @@ static void md_end_clone_io(struct bio *bio)
>>>>      if (md_io_clone->start_time)
>>>>              bio_end_io_acct(orig_bio, md_io_clone->start_time);
>>>> 
>>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>>>>      bio_put(bio);
>>>>      bio_endio(orig_bio);
>>>>      percpu_ref_put(&mddev->active_io);
>>>> @@ -8778,6 +8803,7 @@ static void md_clone_bio(struct mddev *mddev,
>>>> struct bio **bio)
>>>>              bio_alloc_clone(bdev, *bio, GFP_NOIO,
>>>> &mddev->io_clone_set);
>>>> 
>>>>      md_io_clone = container_of(clone, struct md_io_clone, bio_clone);
>>>> +       md_io_clone->sectors = bio_sectors(*bio);
>>>>      md_io_clone->orig_bio = *bio;
>>>>      md_io_clone->mddev = mddev;
>>>>      if (blk_queue_io_stat(bdev->bd_disk->queue))
>>>> @@ -8790,6 +8816,7 @@ static void md_clone_bio(struct mddev *mddev,
>>>> struct bio **bio)
>>>> 
>>>> void md_account_bio(struct mddev *mddev, struct bio **bio)
>>>> {
>>>> +       bitmap_startwrite(mddev, *bio);
>>>>      percpu_ref_get(&mddev->active_io);
>>>>      md_clone_bio(mddev, bio);
>>>> }
>>>> @@ -8807,6 +8834,8 @@ void md_free_cloned_bio(struct bio *bio)
>>>>      if (md_io_clone->start_time)
>>>>              bio_end_io_acct(orig_bio, md_io_clone->start_time);
>>>> 
>>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>>>> +
>>>>      bio_put(bio);
>>>>      percpu_ref_put(&mddev->active_io);
>>>> }
>>>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>>>> index a0d6827dced9..0c2794230e0a 100644
>>>> --- a/drivers/md/md.h
>>>> +++ b/drivers/md/md.h
>>>> @@ -837,6 +837,7 @@ struct md_io_clone {
>>>>      struct mddev    *mddev;
>>>>      struct bio      *orig_bio;
>>>>      unsigned long   start_time;
>>>> +       sector_t        sectors;
>>>>      struct bio      bio_clone;
>>>> };
>>>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>>>> index c14cf2410365..4f009e32f68a 100644
>>>> --- a/drivers/md/raid5.c
>>>> +++ b/drivers/md/raid5.c
>>>> @@ -3561,12 +3561,6 @@ static void __add_stripe_bio(struct stripe_head
>>>> *sh, struct bio *bi,
>>>>               * is added to a batch, STRIPE_BIT_DELAY cannot be changed
>>>>               * any more.
>>>>               */
>>>> -               set_bit(STRIPE_BITMAP_PENDING, &sh->state);
>>>> -               spin_unlock_irq(&sh->stripe_lock);
>>>> -               md_bitmap_startwrite(conf->mddev->bitmap, sh->sector,
>>>> -                                    RAID5_STRIPE_SECTORS(conf), 0);
>>>> -               spin_lock_irq(&sh->stripe_lock);
>>>> -               clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
>>>>              if (!sh->batch_head) {
>>>>                      sh->bm_seq = conf->seq_flush+1;
>>>>                      set_bit(STRIPE_BIT_DELAY, &sh->state);
>>>> @@ -3621,7 +3615,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>> stripe_head *sh,
>>>>      BUG_ON(sh->batch_head);
>>>>      for (i = disks; i--; ) {
>>>>              struct bio *bi;
>>>> -               int bitmap_end = 0;
>>>> 
>>>>              if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
>>>>                      struct md_rdev *rdev = conf->disks[i].rdev;
>>>> @@ -3646,8 +3639,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>> stripe_head *sh,
>>>>              sh->dev[i].towrite = NULL;
>>>>              sh->overwrite_disks = 0;
>>>>              spin_unlock_irq(&sh->stripe_lock);
>>>> -               if (bi)
>>>> -                       bitmap_end = 1;
>>>> 
>>>>              log_stripe_write_finished(sh);
>>>> @@ -3662,10 +3653,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>> stripe_head *sh,
>>>>                      bio_io_error(bi);
>>>>                      bi = nextbi;
>>>>              }
>>>> -               if (bitmap_end)
>>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>>>> -                                          RAID5_STRIPE_SECTORS(conf),
>>>> 0, 0);
>>>> -               bitmap_end = 0;
>>>>              /* and fail all 'written' */
>>>>              bi = sh->dev[i].written;
>>>>              sh->dev[i].written = NULL;
>>>> @@ -3674,7 +3661,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>> stripe_head *sh,
>>>>                      sh->dev[i].page = sh->dev[i].orig_page;
>>>>              }
>>>> 
>>>> -               if (bi) bitmap_end = 1;
>>>>              while (bi && bi->bi_iter.bi_sector <
>>>>                     sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
>>>>                      struct bio *bi2 = r5_next_bio(conf, bi,
>>>> sh->dev[i].sector);
>>>> @@ -3708,9 +3694,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>> stripe_head *sh,
>>>>                              bi = nextbi;
>>>>                      }
>>>>              }
>>>> -               if (bitmap_end)
>>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>>>> -                                          RAID5_STRIPE_SECTORS(conf),
>>>> 0, 0);
>>>>              /* If we were in the middle of a write the parity block
>>>> might
>>>>               * still be locked - so just clear all R5_LOCKED flags
>>>>               */
>>>> @@ -4059,10 +4042,6 @@ static void handle_stripe_clean_event(struct
>>>> r5conf *conf,
>>>>                                      bio_endio(wbi);
>>>>                                      wbi = wbi2;
>>>>                              }
>>>> -                               md_bitmap_endwrite(conf->mddev->bitmap,
>>>> sh->sector,
>>>> -
>>>> RAID5_STRIPE_SECTORS(conf),
>>>> -
>>>> !test_bit(STRIPE_DEGRADED, &sh->state),
>>>> -                                                  0);
>>>>                              if (head_sh->batch_head) {
>>>>                                      sh =
>>>> list_first_entry(&sh->batch_list,
>>>>                                                            struct
>>>> stripe_head,
>>>> @@ -5788,13 +5767,6 @@ static void make_discard_request(struct mddev
>>>> *mddev, struct bio *bi)
>>>>              }
>>>>              spin_unlock_irq(&sh->stripe_lock);
>>>>              if (conf->mddev->bitmap) {
>>>> -                       for (d = 0;
>>>> -                            d < conf->raid_disks - conf->max_degraded;
>>>> -                            d++)
>>>> -                               md_bitmap_startwrite(mddev->bitmap,
>>>> -                                                    sh->sector,
>>>> -
>>>> RAID5_STRIPE_SECTORS(conf),
>>>> -                                                    0);
>>>>                      sh->bm_seq = conf->seq_flush + 1;
>>>>                      set_bit(STRIPE_BIT_DELAY, &sh->state);
>>>>              }
>>>> 
>>>> 
>>>> 
>>>>> 
>>>>> Thanks a lot for your help!
>>>>> Christian
>>>>> 
>>>> 
>>>> 
>>> 
>>> Hi Kuai
>>> 
>>> Maybe it's not good to move the bitmap operations from raid5 into md
>>> when the new API is only used by raid5. Also, the bitmap region that
>>> raid5 needs to handle is based on the member disk, so it should be
>>> calculated from the member-disk layout rather than from the bio
>>> address space, because the bio address space covers the whole array.
>>> 
>>> We have a customer who reported a similar problem. There is a patch
>>> from David; I have put it in the attachment.
>>> 
>>> @Christian, can you give the patch a try? It applies cleanly on
>>> 6.11-rc6.
>>> 
>>> Regards
>>> Xiao
>>> <md_raid5_one_bitmap_claim_per_stripe_head.patch>
>> 
>> Kind regards,
>> Christian Theune
>> 
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> 
> 
> Kind regards,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-12  6:57                                                                                                     ` Christian Theune
@ 2024-11-14 15:07                                                                                                       ` Christian Theune
  2024-11-15  8:07                                                                                                         ` Xiao Ni
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-14 15:07 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

Hi,

Just a follow-up: the system ran for over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient; if I understand correctly, Yu and Xiao are working on an amalgamated fix?

Christian

> On 12. Nov 2024, at 07:57, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
> My workload has now been running successfully for 22 hours; it seems that the patch works.
> 
> If this gets accepted then I’d kindly ask for an LTS backport to 6.6.
> 
> Thanks to everyone for helping figure this out and fix it!
> 
> Christian
> 
>> On 11. Nov 2024, at 15:34, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> I’ve been running with my workflow that is known to trigger the issue reliably for about 6 hours now. This is longer than it lasted before.
>> I’m leaving the office for today and will leave things running over night and report back tomorrow.
>> 
>> Christian
>> 
>>> On 11. Nov 2024, at 09:00, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Hi,
>>> 
>>> I’m trying this with 6.11.7 today.
>>> 
>>> Christian
>>> 
>>>> On 9. Nov 2024, at 12:35, Xiao Ni <xni@redhat.com> wrote:
>>>> 
>>>> On Thu, Nov 7, 2024 at 3:55 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>> 
>>>>> Hi!
>>>>> 
>>>>> On 2024/11/06 14:40, Christian Theune wrote:
>>>>>> Hi,
>>>>>> 
>>>>>>> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> On 2024/11/05 18:15, Christian Theune wrote:
>>>>>>>> Hi,
>>>>>>>> after about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful, otherwise I’ll keep posting that as it’s the only real data I can show)
>>>>>>> 
>>>>>>> This is bad news :(
>>>>>> 
>>>>>> Yeah. But: the good news is that we aren’t eating any data so far … ;)
>>>>>> 
>>>>>>> While reviewing related code, I came up with a plan to move the bitmap
>>>>>>> start/end write ops to the upper layer: make sure each write IO from
>>>>>>> the upper layer starts once and ends once. This makes it easy to keep
>>>>>>> them balanced, and avoiding many calls should improve performance as
>>>>>>> well.
>>>>>> 
>>>>>> Sounds like a plan!
>>>>>> 
>>>>>>> However, I need a few days to cook up a patch after work.
>>>>>> 
>>>>>> Sure thing! I’ll switch off bitmaps for that time - I’m happy we found a workaround so we can take time to resolve it cleanly. :)
>>>>> 
>>>>> I wrote a simple and crude version, please give it a test again.
>>>>> 
>>>>> Thanks,
>>>>> Kuai
>>>>> 
>>>>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>>>>> index d3a837506a36..5e1a82b79e41 100644
>>>>> --- a/drivers/md/md.c
>>>>> +++ b/drivers/md/md.c
>>>>> @@ -8753,6 +8753,30 @@ void md_submit_discard_bio(struct mddev *mddev,
>>>>> struct md_rdev *rdev,
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(md_submit_discard_bio);
>>>>> 
>>>>> +static bool is_raid456(struct mddev *mddev)
>>>>> +{
>>>>> +       return mddev->pers->level == 4 || mddev->pers->level == 5 ||
>>>>> +              mddev->pers->level == 6;
>>>>> +}
>>>>> +
>>>>> +static void bitmap_startwrite(struct mddev *mddev, struct bio *bio)
>>>>> +{
>>>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>>>>> +               return;
>>>>> +
>>>>> +       md_bitmap_startwrite(mddev->bitmap, bio_offset(bio),
>>>>> bio_sectors(bio),
>>>>> +                            0);
>>>>> +}
>>>>> +
>>>>> +static void bitmap_endwrite(struct mddev *mddev, struct bio *bio,
>>>>> sector_t sectors)
>>>>> +{
>>>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
>>>>> +               return;
>>>>> +
>>>>> +       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,
>>>>> +                          bio->bi_status == BLK_STS_OK, 0);
>>>>> +}
>>>>> +
>>>>> static void md_end_clone_io(struct bio *bio)
>>>>> {
>>>>>     struct md_io_clone *md_io_clone = bio->bi_private;
>>>>> @@ -8765,6 +8789,7 @@ static void md_end_clone_io(struct bio *bio)
>>>>>     if (md_io_clone->start_time)
>>>>>             bio_end_io_acct(orig_bio, md_io_clone->start_time);
>>>>> 
>>>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>>>>>     bio_put(bio);
>>>>>     bio_endio(orig_bio);
>>>>>     percpu_ref_put(&mddev->active_io);
>>>>> @@ -8778,6 +8803,7 @@ static void md_clone_bio(struct mddev *mddev,
>>>>> struct bio **bio)
>>>>>             bio_alloc_clone(bdev, *bio, GFP_NOIO,
>>>>> &mddev->io_clone_set);
>>>>> 
>>>>>     md_io_clone = container_of(clone, struct md_io_clone, bio_clone);
>>>>> +       md_io_clone->sectors = bio_sectors(*bio);
>>>>>     md_io_clone->orig_bio = *bio;
>>>>>     md_io_clone->mddev = mddev;
>>>>>     if (blk_queue_io_stat(bdev->bd_disk->queue))
>>>>> @@ -8790,6 +8816,7 @@ static void md_clone_bio(struct mddev *mddev,
>>>>> struct bio **bio)
>>>>> 
>>>>> void md_account_bio(struct mddev *mddev, struct bio **bio)
>>>>> {
>>>>> +       bitmap_startwrite(mddev, *bio);
>>>>>     percpu_ref_get(&mddev->active_io);
>>>>>     md_clone_bio(mddev, bio);
>>>>> }
>>>>> @@ -8807,6 +8834,8 @@ void md_free_cloned_bio(struct bio *bio)
>>>>>     if (md_io_clone->start_time)
>>>>>             bio_end_io_acct(orig_bio, md_io_clone->start_time);
>>>>> 
>>>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
>>>>> +
>>>>>     bio_put(bio);
>>>>>     percpu_ref_put(&mddev->active_io);
>>>>> }
>>>>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>>>>> index a0d6827dced9..0c2794230e0a 100644
>>>>> --- a/drivers/md/md.h
>>>>> +++ b/drivers/md/md.h
>>>>> @@ -837,6 +837,7 @@ struct md_io_clone {
>>>>>     struct mddev    *mddev;
>>>>>     struct bio      *orig_bio;
>>>>>     unsigned long   start_time;
>>>>> +       sector_t        sectors;
>>>>>     struct bio      bio_clone;
>>>>> };
>>>>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>>>>> index c14cf2410365..4f009e32f68a 100644
>>>>> --- a/drivers/md/raid5.c
>>>>> +++ b/drivers/md/raid5.c
>>>>> @@ -3561,12 +3561,6 @@ static void __add_stripe_bio(struct stripe_head
>>>>> *sh, struct bio *bi,
>>>>>              * is added to a batch, STRIPE_BIT_DELAY cannot be changed
>>>>>              * any more.
>>>>>              */
>>>>> -               set_bit(STRIPE_BITMAP_PENDING, &sh->state);
>>>>> -               spin_unlock_irq(&sh->stripe_lock);
>>>>> -               md_bitmap_startwrite(conf->mddev->bitmap, sh->sector,
>>>>> -                                    RAID5_STRIPE_SECTORS(conf), 0);
>>>>> -               spin_lock_irq(&sh->stripe_lock);
>>>>> -               clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
>>>>>             if (!sh->batch_head) {
>>>>>                     sh->bm_seq = conf->seq_flush+1;
>>>>>                     set_bit(STRIPE_BIT_DELAY, &sh->state);
>>>>> @@ -3621,7 +3615,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>>> stripe_head *sh,
>>>>>     BUG_ON(sh->batch_head);
>>>>>     for (i = disks; i--; ) {
>>>>>             struct bio *bi;
>>>>> -               int bitmap_end = 0;
>>>>> 
>>>>>             if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
>>>>>                     struct md_rdev *rdev = conf->disks[i].rdev;
>>>>> @@ -3646,8 +3639,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>>> stripe_head *sh,
>>>>>             sh->dev[i].towrite = NULL;
>>>>>             sh->overwrite_disks = 0;
>>>>>             spin_unlock_irq(&sh->stripe_lock);
>>>>> -               if (bi)
>>>>> -                       bitmap_end = 1;
>>>>> 
>>>>>             log_stripe_write_finished(sh);
>>>>> @@ -3662,10 +3653,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>>> stripe_head *sh,
>>>>>                     bio_io_error(bi);
>>>>>                     bi = nextbi;
>>>>>             }
>>>>> -               if (bitmap_end)
>>>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>>>>> -                                          RAID5_STRIPE_SECTORS(conf),
>>>>> 0, 0);
>>>>> -               bitmap_end = 0;
>>>>>             /* and fail all 'written' */
>>>>>             bi = sh->dev[i].written;
>>>>>             sh->dev[i].written = NULL;
>>>>> @@ -3674,7 +3661,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>>> stripe_head *sh,
>>>>>                     sh->dev[i].page = sh->dev[i].orig_page;
>>>>>             }
>>>>> 
>>>>> -               if (bi) bitmap_end = 1;
>>>>>             while (bi && bi->bi_iter.bi_sector <
>>>>>                    sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
>>>>>                     struct bio *bi2 = r5_next_bio(conf, bi,
>>>>> sh->dev[i].sector);
>>>>> @@ -3708,9 +3694,6 @@ handle_failed_stripe(struct r5conf *conf, struct
>>>>> stripe_head *sh,
>>>>>                             bi = nextbi;
>>>>>                     }
>>>>>             }
>>>>> -               if (bitmap_end)
>>>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
>>>>> -                                          RAID5_STRIPE_SECTORS(conf),
>>>>> 0, 0);
>>>>>             /* If we were in the middle of a write the parity block
>>>>> might
>>>>>              * still be locked - so just clear all R5_LOCKED flags
>>>>>              */
>>>>> @@ -4059,10 +4042,6 @@ static void handle_stripe_clean_event(struct
>>>>> r5conf *conf,
>>>>>                                     bio_endio(wbi);
>>>>>                                     wbi = wbi2;
>>>>>                             }
>>>>> -                               md_bitmap_endwrite(conf->mddev->bitmap,
>>>>> sh->sector,
>>>>> -
>>>>> RAID5_STRIPE_SECTORS(conf),
>>>>> -
>>>>> !test_bit(STRIPE_DEGRADED, &sh->state),
>>>>> -                                                  0);
>>>>>                             if (head_sh->batch_head) {
>>>>>                                     sh =
>>>>> list_first_entry(&sh->batch_list,
>>>>>                                                           struct
>>>>> stripe_head,
>>>>> @@ -5788,13 +5767,6 @@ static void make_discard_request(struct mddev
>>>>> *mddev, struct bio *bi)
>>>>>             }
>>>>>             spin_unlock_irq(&sh->stripe_lock);
>>>>>             if (conf->mddev->bitmap) {
>>>>> -                       for (d = 0;
>>>>> -                            d < conf->raid_disks - conf->max_degraded;
>>>>> -                            d++)
>>>>> -                               md_bitmap_startwrite(mddev->bitmap,
>>>>> -                                                    sh->sector,
>>>>> -
>>>>> RAID5_STRIPE_SECTORS(conf),
>>>>> -                                                    0);
>>>>>                     sh->bm_seq = conf->seq_flush + 1;
>>>>>                     set_bit(STRIPE_BIT_DELAY, &sh->state);
>>>>>             }
>>>>> 
>>>>> 
>>>>> 
>>>>>> 
>>>>>> Thanks a lot for your help!
>>>>>> Christian
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> Hi Kuai
>>>> 
>>>> Maybe it's not good to move the bitmap operations from raid5 into md
>>>> when the new API is only used by raid5. Also, the bitmap region that
>>>> raid5 needs to handle is based on the member disk, so it should be
>>>> calculated from the stripe geometry rather than from the bio address
>>>> space, because the bio address space covers the whole array.
>>>> 
>>>> We have a customer who reports a similar problem. There is a patch
>>>> from David. I put it in the attachment.
>>>> 
>>>> @Christian, can you have a try with the patch? It can be applied
>>>> cleanly on 6.11-rc6
>>>> 
>>>> Regards
>>>> Xiao
>>>> <md_raid5_one_bitmap_claim_per_stripe_head.patch>
>>> 
>>> Kind regards,
>>> Christian Theune
>>> 
>>> -- 
>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>> 
>> 
>> Kind regards,
>> Christian Theune
>> 
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> 
> 
> Kind regards,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 
> 

Kind regards,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-14 15:07                                                                                                       ` Christian Theune
@ 2024-11-15  8:07                                                                                                         ` Xiao Ni
  2024-11-15  8:44                                                                                                           ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Xiao Ni @ 2024-11-15  8:07 UTC (permalink / raw)
  To: Christian Theune
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>
> Hi,
>
> just a follow-up: the system ran for over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient, and if I understand correctly, Yu and Xiao are working on an amalgamated fix?
>
> Christian

Hi Christian

Besides the bitmap stuck problem, the other thread has hit a new problem.
But it looks like you don't have that new problem, because you already
ran without failure for 2 days. I'll send patches against 6.13 and
6.11.
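
To make the "bitmap stuck" failure mode concrete, here is a toy model (illustrative Python only, nothing like the real md code, and all names are made up): the write-intent bitmap keeps a pending-write counter per chunk, and an unbalanced startwrite/endwrite pairing leaves a chunk whose dirty bit can never be cleared, so the flush path stalls indefinitely.

```python
# Toy model of write-intent bitmap accounting (illustrative only).
class Bitmap:
    def __init__(self, chunks):
        # Number of in-flight writes per bitmap chunk.
        self.pending = [0] * chunks

    def startwrite(self, chunk):
        self.pending[chunk] += 1

    def endwrite(self, chunk):
        assert self.pending[chunk] > 0, "unbalanced endwrite"
        self.pending[chunk] -= 1

    def can_clear(self, chunk):
        # A chunk's dirty bit may only be cleared when no write is in flight.
        return self.pending[chunk] == 0

bm = Bitmap(4)

# Balanced pairing: the dirty bit can be cleared afterwards.
bm.startwrite(0)
bm.endwrite(0)
print(bm.can_clear(0))  # True

# Unbalanced pairing (the bug class discussed in this thread): an extra
# startwrite is never matched, so the chunk stays dirty forever.
bm.startwrite(1)
bm.startwrite(1)
bm.endwrite(1)
print(bm.can_clear(1))  # False
```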

Regards
Xiao

>
> > On 12. Nov 2024, at 07:57, Christian Theune <ct@flyingcircus.io> wrote:
> >
> > Hi,
> >
> > my workload has been running for 22 hours now successfully - it seems that the patch works.
> >
> > If this gets accepted then I’d kindly ask for an LTS backport to 6.6.
> >
> > Thanks to everyone for helping figure this out and fixing it!
> >
> > Christian
> >
> >> On 11. Nov 2024, at 15:34, Christian Theune <ct@flyingcircus.io> wrote:
> >>
> >> I’ve been running with my workflow that is known to trigger the issue reliably for about 6 hours now. This is longer than it lasted before.
> >> I’m leaving the office for today and will leave things running over night and report back tomorrow.
> >>
> >> Christian
> >>
> >>> On 11. Nov 2024, at 09:00, Christian Theune <ct@flyingcircus.io> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I’m trying this with 6.11.7 today.
> >>>
> >>> Christian
> >>>
> >>>> On 9. Nov 2024, at 12:35, Xiao Ni <xni@redhat.com> wrote:
> >>>>
> >>>> On Thu, Nov 7, 2024 at 3:55 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>>>>
> >>>>> Hi!
> >>>>>
> >>>>> On 2024/11/06 14:40, Christian Theune wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>>> On 6. Nov 2024, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> On 2024/11/05 18:15, Christian Theune wrote:
> >>>>>>>> Hi,
> >>>>>>>> after about 2 hours it stalled again. Here’s the full blocked process dump. (Tell me if this isn’t helpful, otherwise I’ll keep posting that as it’s the only real data I can show)
> >>>>>>>
> >>>>>>> This is bad news :(
> >>>>>>
> >>>>>> Yeah. But: the good news is that we aren’t eating any data so far … ;)
> >>>>>>
> >>>>>>> While reviewing related code, I come up with a plan to move bitmap
> >>>>>>> start/end write ops to the upper layer. Make sure each write IO from
> >>>>>>> upper layer only start once and end once, this is easy to make sure
> >>>>>>> they are balanced and can avoid many calls to improve performance as
> >>>>>>> well.
> >>>>>>
> >>>>>> Sounds like a plan!
> >>>>>>
> >>>>>>> However, I need a few days to cook up a patch after work.
> >>>>>>
> >>>>>> Sure thing! I’ll switch off bitmaps for that time - I’m happy we found a workaround so we can take time to resolve it cleanly. :)
> >>>>>
> >>>>> I wrote a simple and crude version, please give it a test again.
> >>>>>
> >>>>> Thanks,
> >>>>> Kuai
> >>>>>
> >>>>> diff --git a/drivers/md/md.c b/drivers/md/md.c
> >>>>> index d3a837506a36..5e1a82b79e41 100644
> >>>>> --- a/drivers/md/md.c
> >>>>> +++ b/drivers/md/md.c
> >>>>> @@ -8753,6 +8753,30 @@ void md_submit_discard_bio(struct mddev *mddev,
> >>>>> struct md_rdev *rdev,
> >>>>> }
> >>>>> EXPORT_SYMBOL_GPL(md_submit_discard_bio);
> >>>>>
> >>>>> +static bool is_raid456(struct mddev *mddev)
> >>>>> +{
> >>>>> +       return mddev->pers->level == 4 || mddev->pers->level == 5 ||
> >>>>> +              mddev->pers->level == 6;
> >>>>> +}
> >>>>> +
> >>>>> +static void bitmap_startwrite(struct mddev *mddev, struct bio *bio)
> >>>>> +{
> >>>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
> >>>>> +               return;
> >>>>> +
> >>>>> +       md_bitmap_startwrite(mddev->bitmap, bio_offset(bio),
> >>>>> bio_sectors(bio),
> >>>>> +                            0);
> >>>>> +}
> >>>>> +
> >>>>> +static void bitmap_endwrite(struct mddev *mddev, struct bio *bio,
> >>>>> sector_t sectors)
> >>>>> +{
> >>>>> +       if (!is_raid456(mddev) || !mddev->bitmap)
> >>>>> +               return;
> >>>>> +
> >>>>> +       md_bitmap_endwrite(mddev->bitmap, bio_offset(bio), sectors,
> >>>>> +                          bio->bi_status == BLK_STS_OK, 0);
> >>>>> +}
> >>>>> +
> >>>>> static void md_end_clone_io(struct bio *bio)
> >>>>> {
> >>>>>     struct md_io_clone *md_io_clone = bio->bi_private;
> >>>>> @@ -8765,6 +8789,7 @@ static void md_end_clone_io(struct bio *bio)
> >>>>>     if (md_io_clone->start_time)
> >>>>>             bio_end_io_acct(orig_bio, md_io_clone->start_time);
> >>>>>
> >>>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
> >>>>>     bio_put(bio);
> >>>>>     bio_endio(orig_bio);
> >>>>>     percpu_ref_put(&mddev->active_io);
> >>>>> @@ -8778,6 +8803,7 @@ static void md_clone_bio(struct mddev *mddev,
> >>>>> struct bio **bio)
> >>>>>             bio_alloc_clone(bdev, *bio, GFP_NOIO,
> >>>>> &mddev->io_clone_set);
> >>>>>
> >>>>>     md_io_clone = container_of(clone, struct md_io_clone, bio_clone);
> >>>>> +       md_io_clone->sectors = bio_sectors(*bio);
> >>>>>     md_io_clone->orig_bio = *bio;
> >>>>>     md_io_clone->mddev = mddev;
> >>>>>     if (blk_queue_io_stat(bdev->bd_disk->queue))
> >>>>> @@ -8790,6 +8816,7 @@ static void md_clone_bio(struct mddev *mddev,
> >>>>> struct bio **bio)
> >>>>>
> >>>>> void md_account_bio(struct mddev *mddev, struct bio **bio)
> >>>>> {
> >>>>> +       bitmap_startwrite(mddev, *bio);
> >>>>>     percpu_ref_get(&mddev->active_io);
> >>>>>     md_clone_bio(mddev, bio);
> >>>>> }
> >>>>> @@ -8807,6 +8834,8 @@ void md_free_cloned_bio(struct bio *bio)
> >>>>>     if (md_io_clone->start_time)
> >>>>>             bio_end_io_acct(orig_bio, md_io_clone->start_time);
> >>>>>
> >>>>> +       bitmap_endwrite(mddev, orig_bio, md_io_clone->sectors);
> >>>>> +
> >>>>>     bio_put(bio);
> >>>>>     percpu_ref_put(&mddev->active_io);
> >>>>> }
> >>>>> diff --git a/drivers/md/md.h b/drivers/md/md.h
> >>>>> index a0d6827dced9..0c2794230e0a 100644
> >>>>> --- a/drivers/md/md.h
> >>>>> +++ b/drivers/md/md.h
> >>>>> @@ -837,6 +837,7 @@ struct md_io_clone {
> >>>>>     struct mddev    *mddev;
> >>>>>     struct bio      *orig_bio;
> >>>>>     unsigned long   start_time;
> >>>>> +       sector_t        sectors;
> >>>>>     struct bio      bio_clone;
> >>>>> };
> >>>>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> >>>>> index c14cf2410365..4f009e32f68a 100644
> >>>>> --- a/drivers/md/raid5.c
> >>>>> +++ b/drivers/md/raid5.c
> >>>>> @@ -3561,12 +3561,6 @@ static void __add_stripe_bio(struct stripe_head
> >>>>> *sh, struct bio *bi,
> >>>>>              * is added to a batch, STRIPE_BIT_DELAY cannot be changed
> >>>>>              * any more.
> >>>>>              */
> >>>>> -               set_bit(STRIPE_BITMAP_PENDING, &sh->state);
> >>>>> -               spin_unlock_irq(&sh->stripe_lock);
> >>>>> -               md_bitmap_startwrite(conf->mddev->bitmap, sh->sector,
> >>>>> -                                    RAID5_STRIPE_SECTORS(conf), 0);
> >>>>> -               spin_lock_irq(&sh->stripe_lock);
> >>>>> -               clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
> >>>>>             if (!sh->batch_head) {
> >>>>>                     sh->bm_seq = conf->seq_flush+1;
> >>>>>                     set_bit(STRIPE_BIT_DELAY, &sh->state);
> >>>>> @@ -3621,7 +3615,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> >>>>> stripe_head *sh,
> >>>>>     BUG_ON(sh->batch_head);
> >>>>>     for (i = disks; i--; ) {
> >>>>>             struct bio *bi;
> >>>>> -               int bitmap_end = 0;
> >>>>>
> >>>>>             if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
> >>>>>                     struct md_rdev *rdev = conf->disks[i].rdev;
> >>>>> @@ -3646,8 +3639,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> >>>>> stripe_head *sh,
> >>>>>             sh->dev[i].towrite = NULL;
> >>>>>             sh->overwrite_disks = 0;
> >>>>>             spin_unlock_irq(&sh->stripe_lock);
> >>>>> -               if (bi)
> >>>>> -                       bitmap_end = 1;
> >>>>>
> >>>>>             log_stripe_write_finished(sh);
> >>>>> @@ -3662,10 +3653,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> >>>>> stripe_head *sh,
> >>>>>                     bio_io_error(bi);
> >>>>>                     bi = nextbi;
> >>>>>             }
> >>>>> -               if (bitmap_end)
> >>>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
> >>>>> -                                          RAID5_STRIPE_SECTORS(conf),
> >>>>> 0, 0);
> >>>>> -               bitmap_end = 0;
> >>>>>             /* and fail all 'written' */
> >>>>>             bi = sh->dev[i].written;
> >>>>>             sh->dev[i].written = NULL;
> >>>>> @@ -3674,7 +3661,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> >>>>> stripe_head *sh,
> >>>>>                     sh->dev[i].page = sh->dev[i].orig_page;
> >>>>>             }
> >>>>>
> >>>>> -               if (bi) bitmap_end = 1;
> >>>>>             while (bi && bi->bi_iter.bi_sector <
> >>>>>                    sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
> >>>>>                     struct bio *bi2 = r5_next_bio(conf, bi,
> >>>>> sh->dev[i].sector);
> >>>>> @@ -3708,9 +3694,6 @@ handle_failed_stripe(struct r5conf *conf, struct
> >>>>> stripe_head *sh,
> >>>>>                             bi = nextbi;
> >>>>>                     }
> >>>>>             }
> >>>>> -               if (bitmap_end)
> >>>>> -                       md_bitmap_endwrite(conf->mddev->bitmap, sh->sector,
> >>>>> -                                          RAID5_STRIPE_SECTORS(conf),
> >>>>> 0, 0);
> >>>>>             /* If we were in the middle of a write the parity block
> >>>>> might
> >>>>>              * still be locked - so just clear all R5_LOCKED flags
> >>>>>              */
> >>>>> @@ -4059,10 +4042,6 @@ static void handle_stripe_clean_event(struct
> >>>>> r5conf *conf,
> >>>>>                                     bio_endio(wbi);
> >>>>>                                     wbi = wbi2;
> >>>>>                             }
> >>>>> -                               md_bitmap_endwrite(conf->mddev->bitmap,
> >>>>> sh->sector,
> >>>>> -
> >>>>> RAID5_STRIPE_SECTORS(conf),
> >>>>> -
> >>>>> !test_bit(STRIPE_DEGRADED, &sh->state),
> >>>>> -                                                  0);
> >>>>>                             if (head_sh->batch_head) {
> >>>>>                                     sh =
> >>>>> list_first_entry(&sh->batch_list,
> >>>>>                                                           struct
> >>>>> stripe_head,
> >>>>> @@ -5788,13 +5767,6 @@ static void make_discard_request(struct mddev
> >>>>> *mddev, struct bio *bi)
> >>>>>             }
> >>>>>             spin_unlock_irq(&sh->stripe_lock);
> >>>>>             if (conf->mddev->bitmap) {
> >>>>> -                       for (d = 0;
> >>>>> -                            d < conf->raid_disks - conf->max_degraded;
> >>>>> -                            d++)
> >>>>> -                               md_bitmap_startwrite(mddev->bitmap,
> >>>>> -                                                    sh->sector,
> >>>>> -
> >>>>> RAID5_STRIPE_SECTORS(conf),
> >>>>> -                                                    0);
> >>>>>                     sh->bm_seq = conf->seq_flush + 1;
> >>>>>                     set_bit(STRIPE_BIT_DELAY, &sh->state);
> >>>>>             }
> >>>>>
> >>>>>
> >>>>>
> >>>>>>
> >>>>>> Thanks a lot for your help!
> >>>>>> Christian
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>>> Hi Kuai
> >>>>
> >>>> Maybe it's not good to put the bitmap operation from raid5 to md which
> >>>> the new api is only used for raid5. And the bitmap region which raid5
> >>>> needs to handle is based on the member disk. It should be calculated
> >>>> rather than the bio address space. Because the bio address space is
> >>>> for the whole array.
> >>>>
> >>>> We have a customer who reports a similar problem. There is a patch
> >>>> from David. I put it in the attachment.
> >>>>
> >>>> @Christian, can you have a try with the patch? It can be applied
> >>>> cleanly on 6.11-rc6
> >>>>
> >>>> Regards
> >>>> Xiao
> >>>> <md_raid5_one_bitmap_claim_per_stripe_head.patch>
> >>>
> >>> Kind regards,
> >>> Christian Theune
> >>>
> >>> --
> >>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> >>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> >>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> >>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> >>>
> >>
> >> Kind regards,
> >> Christian Theune
> >>
> >> --
> >> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> >> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> >> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> >> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> >>
> >
> > Kind regards,
> > Christian Theune
> >
> > --
> > Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> > Flying Circus Internet Operations GmbH · https://flyingcircus.io
> > Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> > HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> >
> >
>
> Kind regards,
> Christian Theune
>
> --
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-15  8:07                                                                                                         ` Xiao Ni
@ 2024-11-15  8:44                                                                                                           ` Christian Theune
  2024-11-15 10:11                                                                                                             ` Xiao Ni
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-15  8:44 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

Hi,

> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
> 
> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Hi,
>> 
>> just a follow-up: the system ran for over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient, and if I understand correctly, Yu and Xiao are working on an amalgamated fix?
>> 
>> Christian
> 
> Hi Christian
> 
> Beside the bitmap stuck problem, the other thread has a new problem.
> But it looks like you don't have the new problem because you already
> ran without failure for 2 days. I'll send patches against 6.13 and
> 6.11.

Great, thanks!

What do I need to do to get patches towards 6.6?

Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-15  8:44                                                                                                           ` Christian Theune
@ 2024-11-15 10:11                                                                                                             ` Xiao Ni
  2024-11-15 11:06                                                                                                               ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Xiao Ni @ 2024-11-15 10:11 UTC (permalink / raw)
  To: Christian Theune
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>
> Hi,
>
> > On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
> >
> > On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
> >>
> >> Hi,
> >>
> >> just a follow-up: the system ran for over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient, and if I understand correctly, Yu and Xiao are working on an amalgamated fix?
> >>
> >> Christian
> >
> > Hi Christian
> >
> > Beside the bitmap stuck problem, the other thread has a new problem.
> > But it looks like you don't have the new problem because you already
> > ran without failure for 2 days. I'll send patches against 6.13 and
> > 6.11.
>
> Great, thanks!
>
> What do I need to do to get patches towards 6.6?

Hi

This patch applies cleanly to 6.6. You can give it a try on 6.6 and
see if it works.

Regards
Xiao
>
> Christian
>
> --
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-15 10:11                                                                                                             ` Xiao Ni
@ 2024-11-15 11:06                                                                                                               ` Christian Theune
  2024-12-10  8:33                                                                                                                 ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-11-15 11:06 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

Will do that!

> On 15. Nov 2024, at 11:11, Xiao Ni <xni@redhat.com> wrote:
> 
> On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Hi,
>> 
>>> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
>>> 
>>> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> just a follow-up: the system ran for over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient, and if I understand correctly, Yu and Xiao are working on an amalgamated fix?
>>>> 
>>>> Christian
>>> 
>>> Hi Christian
>>> 
>>> Beside the bitmap stuck problem, the other thread has a new problem.
>>> But it looks like you don't have the new problem because you already
>>> ran without failure for 2 days. I'll send patches against 6.13 and
>>> 6.11.
>> 
>> Great, thanks!
>> 
>> What do I need to do to get patches towards 6.6?
> 
> Hi
> 
> This patch can apply to 6.6 cleanly. You can have a try on 6.6 with
> this patch to see if it works.
> 
> Regards
> Xiao
>> 
>> Christian
>> 
>> --
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-11-15 11:06                                                                                                               ` Christian Theune
@ 2024-12-10  8:33                                                                                                                 ` Christian Theune
  2024-12-16 13:25                                                                                                                   ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-12-10  8:33 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

Just a quick update: I’ve been out sick and am only now getting around to testing the patch on 6.6. It applied cleanly, as you suggested, and I’m waiting for the compile to finish. I’ll get back to you in the next few days on how it worked out.
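(For reference, the dry-run-then-apply workflow behind "applied cleanly" can be sketched on a toy tree. Everything below is fabricated for illustration: the file name, paths, and patch contents stand in for the real 6.6 source tree and the patch mailed on the list; a real run would end with a kernel build such as `make olddefconfig && make -j"$(nproc)"`.)

```shell
set -eu
work=$(mktemp -d)
cd "$work"

# Build a toy "patch" the way one arrives from the list: a unified diff
# between the old and the fixed version of one file. The md-bitmap.c
# name is only illustrative.
mkdir -p a/drivers/md b/drivers/md
printf 'bitmap: old behaviour\n'   > a/drivers/md/md-bitmap.c
printf 'bitmap: fixed behaviour\n' > b/drivers/md/md-bitmap.c
diff -u a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c > fix.patch || true

# Stand-in for the checked-out source tree.
mkdir -p tree/drivers/md
cp a/drivers/md/md-bitmap.c tree/drivers/md/

cd tree
# Dry-run first: verifies the patch applies cleanly without modifying
# any files.
patch -p1 --dry-run < ../fix.patch
# Then apply for real.
patch -p1 < ../fix.patch
grep 'fixed behaviour' drivers/md/md-bitmap.c
```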

> On 15. Nov 2024, at 12:06, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Will do that!
> 
>> On 15. Nov 2024, at 11:11, Xiao Ni <xni@redhat.com> wrote:
>> 
>> On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Hi,
>>> 
>>>> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
>>>> 
>>>> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> just a followup: the system ran over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient and if i understand correctly, Yu and Xiao are working on an amalgamated fix?
>>>>> 
>>>>> Christian
>>>> 
>>>> Hi Christian
>>>> 
>>>> Beside the bitmap stuck problem, the other thread has a new problem.
>>>> But it looks like you don't have the new problem because you already
>>>> ran without failure for 2 days. I'll send patches against 6.13 and
>>>> 6.11.
>>> 
>>> Great, thanks!
>>> 
>>> What do I need to do to get patches towards 6.6?
>> 
>> Hi
>> 
>> This patch can apply to 6.6 cleanly. You can have a try on 6.6 with
>> this patch to see if it works.
>> 
>> Regards
>> Xiao
>>> 
>>> Christian
>>> 
>>> --
>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 
> 
> Liebe Grüße,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-12-10  8:33                                                                                                                 ` Christian Theune
@ 2024-12-16 13:25                                                                                                                   ` Christian Theune
  2024-12-16 13:36                                                                                                                     ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-12-16 13:25 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, yukuai (C),
	David Jeffery

Hi,

Both my servers that exhibited this issue have been running fine with 6.6.64 and the proposed patch.

@Yu: I’d love to get this backported; is there anything I can or need to do?

Christian

> On 10. Dec 2024, at 09:33, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Just a quick update: i’ve been out sick and only am getting around to start testing the patch on 6.6. it applied cleanly as you suggested and I’m waiting for the compile to finish. I’ll get back to you in the next days how it worked out.
> 
>> On 15. Nov 2024, at 12:06, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Will do that!
>> 
>>> On 15. Nov 2024, at 11:11, Xiao Ni <xni@redhat.com> wrote:
>>> 
>>> On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>>> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
>>>>> 
>>>>> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>> just a followup: the system ran over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient and if i understand correctly, Yu and Xiao are working on an amalgamated fix?
>>>>>> 
>>>>>> Christian
>>>>> 
>>>>> Hi Christian
>>>>> 
>>>>> Beside the bitmap stuck problem, the other thread has a new problem.
>>>>> But it looks like you don't have the new problem because you already
>>>>> ran without failure for 2 days. I'll send patches against 6.13 and
>>>>> 6.11.
>>>> 
>>>> Great, thanks!
>>>> 
>>>> What do I need to do to get patches towards 6.6?
>>> 
>>> Hi
>>> 
>>> This patch can apply to 6.6 cleanly. You can have a try on 6.6 with
>>> this patch to see if it works.
>>> 
>>> Regards
>>> Xiao
>>>> 
>>>> Christian
>>>> 
>>>> --
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> 
>> 
>> Liebe Grüße,
>> Christian Theune
>> 
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> 
> 
> Liebe Grüße,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-12-16 13:25                                                                                                                   ` Christian Theune
@ 2024-12-16 13:36                                                                                                                     ` Yu Kuai
  2024-12-16 14:18                                                                                                                       ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2024-12-16 13:36 UTC (permalink / raw)
  To: Christian Theune, Xiao Ni
  Cc: Yu Kuai, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, David Jeffery,
	yukuai (C)

Hi,

在 2024/12/16 21:25, Christian Theune 写道:
> Hi,
> 
> both my servers that exhibited this issue have been running fine with 6.6.64 and the proposed patch.
> 
> @yu I’d love to get this backported, is there anything I can/need to do?

Looks like you're testing the wrong patch. We won't go with this patch
upstream.

Do you still remember the patch set from the following thread?

https://lore.kernel.org/all/5D6DF34A-81EF-47EE-B280-6A243A28011D@flyingcircus.io/

Sorry, I was busy with other things; I'll push this in the next
merge window (v6.14-rc1), unless it fails your test. :)

Thanks,
Kuai

> 
> Christian
> 
>> On 10. Dec 2024, at 09:33, Christian Theune <ct@flyingcircus.io> wrote:
>>
>> Just a quick update: i’ve been out sick and only am getting around to start testing the patch on 6.6. it applied cleanly as you suggested and I’m waiting for the compile to finish. I’ll get back to you in the next days how it worked out.
>>
>>> On 15. Nov 2024, at 12:06, Christian Theune <ct@flyingcircus.io> wrote:
>>>
>>> Will do that!
>>>
>>>> On 15. Nov 2024, at 11:11, Xiao Ni <xni@redhat.com> wrote:
>>>>
>>>> On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>>> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
>>>>>>
>>>>>> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> just a followup: the system ran over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient and if i understand correctly, Yu and Xiao are working on an amalgamated fix?
>>>>>>>
>>>>>>> Christian
>>>>>>
>>>>>> Hi Christian
>>>>>>
>>>>>> Beside the bitmap stuck problem, the other thread has a new problem.
>>>>>> But it looks like you don't have the new problem because you already
>>>>>> ran without failure for 2 days. I'll send patches against 6.13 and
>>>>>> 6.11.
>>>>>
>>>>> Great, thanks!
>>>>>
>>>>> What do I need to do to get patches towards 6.6?
>>>>
>>>> Hi
>>>>
>>>> This patch can apply to 6.6 cleanly. You can have a try on 6.6 with
>>>> this patch to see if it works.
>>>>
>>>> Regards
>>>> Xiao
>>>>>
>>>>> Christian
>>>>>
>>>>> --
>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>
>>>
>>> Liebe Grüße,
>>> Christian Theune
>>>
>>> -- 
>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>
>>
>> Liebe Grüße,
>> Christian Theune
>>
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>
> 
> Liebe Grüße,
> Christian Theune
> 


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-12-16 13:36                                                                                                                     ` Yu Kuai
@ 2024-12-16 14:18                                                                                                                       ` Christian Theune
  2025-01-20  9:19                                                                                                                         ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2024-12-16 14:18 UTC (permalink / raw)
  To: Yu Kuai
  Cc: Xiao Ni, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, David Jeffery,
	yukuai (C)

Hi,

Oh dang, yeah, I noticed that mail and tried grabbing the proper patch, but since I’d previously had issues in my workflow getting patches from mail, I thought I had picked the right one elsewhere. I’ll try again tomorrow.

Sorry … 

Christian

> On 16. Dec 2024, at 14:36, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> 在 2024/12/16 21:25, Christian Theune 写道:
>> Hi,
>> both my servers that exhibited this issue have been running fine with 6.6.64 and the proposed patch.
>> @yu I’d love to get this backported, is there anything I can/need to do?
> 
> Looks like you're testing the wrong patch. We'll not go with this patch
> in upstream.
> 
> Do you still remember the patch set from following thread?
> 
> https://lore.kernel.org/all/5D6DF34A-81EF-47EE-B280-6A243A28011D@flyingcircus.io/
> 
> Sorry that I was busy with other things, I'll push this in the next
> merge window v6.14-rc1, unless it fails your test. :)
> 
> Thanks,
> Kuai
> 
>> Christian
>>> On 10. Dec 2024, at 09:33, Christian Theune <ct@flyingcircus.io> wrote:
>>> 
>>> Just a quick update: i’ve been out sick and only am getting around to start testing the patch on 6.6. it applied cleanly as you suggested and I’m waiting for the compile to finish. I’ll get back to you in the next days how it worked out.
>>> 
>>>> On 15. Nov 2024, at 12:06, Christian Theune <ct@flyingcircus.io> wrote:
>>>> 
>>>> Will do that!
>>>> 
>>>>> On 15. Nov 2024, at 11:11, Xiao Ni <xni@redhat.com> wrote:
>>>>> 
>>>>> On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>>> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
>>>>>>> 
>>>>>>> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>>> 
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>> just a followup: the system ran over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient and if i understand correctly, Yu and Xiao are working on an amalgamated fix?
>>>>>>>> 
>>>>>>>> Christian
>>>>>>> 
>>>>>>> Hi Christian
>>>>>>> 
>>>>>>> Beside the bitmap stuck problem, the other thread has a new problem.
>>>>>>> But it looks like you don't have the new problem because you already
>>>>>>> ran without failure for 2 days. I'll send patches against 6.13 and
>>>>>>> 6.11.
>>>>>> 
>>>>>> Great, thanks!
>>>>>> 
>>>>>> What do I need to do to get patches towards 6.6?
>>>>> 
>>>>> Hi
>>>>> 
>>>>> This patch can apply to 6.6 cleanly. You can have a try on 6.6 with
>>>>> this patch to see if it works.
>>>>> 
>>>>> Regards
>>>>> Xiao
>>>>>> 
>>>>>> Christian
>>>>>> 
>>>>>> --
>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>> 
>>>> 
>>>> Liebe Grüße,
>>>> Christian Theune
>>>> 
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>> 
>>> 
>>> Liebe Grüße,
>>> Christian Theune
>>> 
>>> -- 
>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>> 
>> Liebe Grüße,
>> Christian Theune
> 

Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2024-12-16 14:18                                                                                                                       ` Christian Theune
@ 2025-01-20  9:19                                                                                                                         ` Christian Theune
  2025-01-24  6:22                                                                                                                           ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2025-01-20  9:19 UTC (permalink / raw)
  To: Yu Kuai
  Cc: Xiao Ni, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, David Jeffery,
	yukuai (C)

Sorry again for the long silence … 

6.13 with the patches applied is now under test in my setup; I’ll let you know in a few days whether it has been stable.

For future archaeologists: the previous patch (that you didn’t go with) worked mostly well but did trigger a few similar crashes.

Christian

> On 16. Dec 2024, at 15:18, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Hi,
> 
> oh dang, yeah, I noticed that mail and I tried grabbing the proper patch but as I previously had issues in my workflow getting them from mail I thought I picked the right one elsewhere. I’ll try again tomorrow.
> 
> Sorry … 
> 
> Christian
> 
>> On 16. Dec 2024, at 14:36, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>> 
>> Hi,
>> 
>> 在 2024/12/16 21:25, Christian Theune 写道:
>>> Hi,
>>> both my servers that exhibited this issue have been running fine with 6.6.64 and the proposed patch.
>>> @yu I’d love to get this backported, is there anything I can/need to do?
>> 
>> Looks like you're testing the wrong patch. We'll not go with this patch
>> in upstream.
>> 
>> Do you still remember the patch set from following thread?
>> 
>> https://lore.kernel.org/all/5D6DF34A-81EF-47EE-B280-6A243A28011D@flyingcircus.io/
>> 
>> Sorry that I was busy with other things, I'll push this in the next
>> merge window v6.14-rc1, unless it fails your test. :)
>> 
>> Thanks,
>> Kuai
>> 
>>> Christian
>>>> On 10. Dec 2024, at 09:33, Christian Theune <ct@flyingcircus.io> wrote:
>>>> 
>>>> Just a quick update: i’ve been out sick and only am getting around to start testing the patch on 6.6. it applied cleanly as you suggested and I’m waiting for the compile to finish. I’ll get back to you in the next days how it worked out.
>>>> 
>>>>> On 15. Nov 2024, at 12:06, Christian Theune <ct@flyingcircus.io> wrote:
>>>>> 
>>>>> Will do that!
>>>>> 
>>>>>> On 15. Nov 2024, at 11:11, Xiao Ni <xni@redhat.com> wrote:
>>>>>> 
>>>>>> On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>>> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
>>>>>>>> 
>>>>>>>> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>>>> 
>>>>>>>>> Hi,
>>>>>>>>> 
>>>>>>>>> just a followup: the system ran over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient and if i understand correctly, Yu and Xiao are working on an amalgamated fix?
>>>>>>>>> 
>>>>>>>>> Christian
>>>>>>>> 
>>>>>>>> Hi Christian
>>>>>>>> 
>>>>>>>> Beside the bitmap stuck problem, the other thread has a new problem.
>>>>>>>> But it looks like you don't have the new problem because you already
>>>>>>>> ran without failure for 2 days. I'll send patches against 6.13 and
>>>>>>>> 6.11.
>>>>>>> 
>>>>>>> Great, thanks!
>>>>>>> 
>>>>>>> What do I need to do to get patches towards 6.6?
>>>>>> 
>>>>>> Hi
>>>>>> 
>>>>>> This patch can apply to 6.6 cleanly. You can have a try on 6.6 with
>>>>>> this patch to see if it works.
>>>>>> 
>>>>>> Regards
>>>>>> Xiao
>>>>>>> 
>>>>>>> Christian
>>>>>>> 
>>>>>>> --
>>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>> 
>>>>> 
>>>>> Liebe Grüße,
>>>>> Christian Theune
>>>>> 
>>>>> -- 
>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>> 
>>>> 
>>>> Liebe Grüße,
>>>> Christian Theune
>>>> 
>>>> -- 
>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>> 
>>> Liebe Grüße,
>>> Christian Theune
>> 
> 
> Liebe Grüße,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2025-01-20  9:19                                                                                                                         ` Christian Theune
@ 2025-01-24  6:22                                                                                                                           ` Christian Theune
  2025-01-24  6:35                                                                                                                             ` Yu Kuai
  0 siblings, 1 reply; 88+ messages in thread
From: Christian Theune @ 2025-01-24  6:22 UTC (permalink / raw)
  To: Yu Kuai
  Cc: Xiao Ni, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, David Jeffery,
	yukuai (C)

Hi,

I’ve been running 6.13 with those patches[1] successfully under production load for about 3 days now - this looks like it’s fixed!

I’d appreciate a backport to 6.6 … is there anyone specific I should notify about that?

Thanks a lot!
Christian

[1] Patches applied to 6.13 vanilla (don’t mind the branch name mentioning 6.11); those should be the right ones, I hope:

https://github.com/flyingcircusio/fc-nixos/blob/PL-132896-try-kernel-6.11-debug-mdraid/pkgs/patch1.patch
https://github.com/flyingcircusio/fc-nixos/blob/PL-132896-try-kernel-6.11-debug-mdraid/pkgs/patch2.patch
https://github.com/flyingcircusio/fc-nixos/blob/PL-132896-try-kernel-6.11-debug-mdraid/pkgs/patch3.patch
https://github.com/flyingcircusio/fc-nixos/blob/PL-132896-try-kernel-6.11-debug-mdraid/pkgs/patch4.patch
https://github.com/flyingcircusio/fc-nixos/blob/PL-132896-try-kernel-6.11-debug-mdraid/pkgs/patch5.patch
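(For a numbered series like the five patches above, the usual approach is to apply them in order with `patch -p1`, or `git am` when they are saved as a mailbox. A self-contained toy sketch of the ordered-application workflow; the file name and patch contents below are fabricated and stand in for the real series:)

```shell
set -eu
work=$(mktemp -d)
cd "$work"

# Fabricated stand-ins for a numbered series (patch1.patch ..
# patch3.patch). Each toy patch rewrites the single line of raid5.c
# from stepN-1 to stepN.
mkdir -p tree/drivers/md
printf 'step0\n' > tree/drivers/md/raid5.c
for i in 1 2 3; do
  prev=$((i - 1))
  printf -- '--- a/drivers/md/raid5.c\n+++ b/drivers/md/raid5.c\n@@ -1 +1 @@\n-step%s\n+step%s\n' \
    "$prev" "$i" > "patch$i.patch"
done

cd tree
# Apply in numeric order; with `set -e` the loop stops at the first
# patch that fails to apply, so a partial application is easy to spot.
for p in ../patch1.patch ../patch2.patch ../patch3.patch; do
  patch -p1 < "$p"
done
cat drivers/md/raid5.c   # ends up at the last step of the series
```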


> On 20. Jan 2025, at 10:19, Christian Theune <ct@flyingcircus.io> wrote:
> 
> Sorry again for the long silence … 
> 
> 6.13 applied with the patches is now under test in my setup, I’ll let you know whether this have been stable in a few days.
> 
> For future archaeologists: the previous patch (that you didn’t go with) worked mostly well but did trigger a few similar crashes.
> 
> Christian
> 
>> On 16. Dec 2024, at 15:18, Christian Theune <ct@flyingcircus.io> wrote:
>> 
>> Hi,
>> 
>> oh dang, yeah, I noticed that mail and I tried grabbing the proper patch but as I previously had issues in my workflow getting them from mail I thought I picked the right one elsewhere. I’ll try again tomorrow.
>> 
>> Sorry … 
>> 
>> Christian
>> 
>>> On 16. Dec 2024, at 14:36, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>> 
>>> Hi,
>>> 
>>> 在 2024/12/16 21:25, Christian Theune 写道:
>>>> Hi,
>>>> both my servers that exhibited this issue have been running fine with 6.6.64 and the proposed patch.
>>>> @yu I’d love to get this backported, is there anything I can/need to do?
>>> 
>>> Looks like you're testing the wrong patch. We'll not go with this patch
>>> in upstream.
>>> 
>>> Do you still remember the patch set from following thread?
>>> 
>>> https://lore.kernel.org/all/5D6DF34A-81EF-47EE-B280-6A243A28011D@flyingcircus.io/
>>> 
>>> Sorry that I was busy with other things, I'll push this in the next
>>> merge window v6.14-rc1, unless it fails your test. :)
>>> 
>>> Thanks,
>>> Kuai
>>> 
>>>> Christian
>>>>> On 10. Dec 2024, at 09:33, Christian Theune <ct@flyingcircus.io> wrote:
>>>>> 
>>>>> Just a quick update: i’ve been out sick and only am getting around to start testing the patch on 6.6. it applied cleanly as you suggested and I’m waiting for the compile to finish. I’ll get back to you in the next days how it worked out.
>>>>> 
>>>>>> On 15. Nov 2024, at 12:06, Christian Theune <ct@flyingcircus.io> wrote:
>>>>>> 
>>>>>> Will do that!
>>>>>> 
>>>>>>> On 15. Nov 2024, at 11:11, Xiao Ni <xni@redhat.com> wrote:
>>>>>>> 
>>>>>>> On Fri, Nov 15, 2024 at 4:45 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>>> 
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>>> On 15. Nov 2024, at 09:07, Xiao Ni <xni@redhat.com> wrote:
>>>>>>>>> 
>>>>>>>>> On Thu, Nov 14, 2024 at 11:07 PM Christian Theune <ct@flyingcircus.io> wrote:
>>>>>>>>>> 
>>>>>>>>>> Hi,
>>>>>>>>>> 
>>>>>>>>>> just a followup: the system ran over 2 days without my workload being able to trigger the issue. I’ve seen there is another thread where this patch wasn’t sufficient and if i understand correctly, Yu and Xiao are working on an amalgamated fix?
>>>>>>>>>> 
>>>>>>>>>> Christian
>>>>>>>>> 
>>>>>>>>> Hi Christian
>>>>>>>>> 
>>>>>>>>> Beside the bitmap stuck problem, the other thread has a new problem.
>>>>>>>>> But it looks like you don't have the new problem because you already
>>>>>>>>> ran without failure for 2 days. I'll send patches against 6.13 and
>>>>>>>>> 6.11.
>>>>>>>> 
>>>>>>>> Great, thanks!
>>>>>>>> 
>>>>>>>> What do I need to do to get patches towards 6.6?
>>>>>>> 
>>>>>>> Hi
>>>>>>> 
>>>>>>> This patch can apply to 6.6 cleanly. You can have a try on 6.6 with
>>>>>>> this patch to see if it works.
>>>>>>> 
>>>>>>> Regards
>>>>>>> Xiao
>>>>>>>> 
>>>>>>>> Christian
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>>> 
>>>>>> 
>>>>>> Liebe Grüße,
>>>>>> Christian Theune
>>>>>> 
>>>>>> -- 
>>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>>> 
>>>>> 
>>>>> Liebe Grüße,
>>>>> Christian Theune
>>>>> 
>>>>> -- 
>>>>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>>>>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>>>>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>>>>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>>>>> 
>>>> Liebe Grüße,
>>>> Christian Theune
>>> 
>> 
>> Liebe Grüße,
>> Christian Theune
>> 
>> -- 
>> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
>> Flying Circus Internet Operations GmbH · https://flyingcircus.io
>> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
>> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
>> 
> 
> Liebe Grüße,
> Christian Theune
> 
> -- 
> Christian Theune · ct@flyingcircus.io · +49 345 219401 0
> Flying Circus Internet Operations GmbH · https://flyingcircus.io
> Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
> HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick
> 

Liebe Grüße,
Christian Theune

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2025-01-24  6:22                                                                                                                           ` Christian Theune
@ 2025-01-24  6:35                                                                                                                             ` Yu Kuai
  2025-01-24  6:38                                                                                                                               ` Christian Theune
  0 siblings, 1 reply; 88+ messages in thread
From: Yu Kuai @ 2025-01-24  6:35 UTC (permalink / raw)
  To: Christian Theune, Yu Kuai
  Cc: Xiao Ni, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, David Jeffery,
	yukuai (C)

Hi,

在 2025/01/24 14:22, Christian Theune 写道:
> Hi,
> 
> I’ve been running 6.13 with those patches[1] successfully with production load for about 3 days now - this looks like it’s fixed!

Thanks for the test!

> 
> I’d appreciate a backport to 6.6 … is there anyone specific I should notify about that?

This is enough; I'll do that. However, Spring Festival is coming, so I'll
rebase the patches to 6.6 after about two weeks, if you don't mind. :)

BTW, those patches are merged in v6.14-rc1.

Thanks,
Kuai

> 
> Thank’s a lot!
> Christian


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files
  2025-01-24  6:35                                                                                                                             ` Yu Kuai
@ 2025-01-24  6:38                                                                                                                               ` Christian Theune
  0 siblings, 0 replies; 88+ messages in thread
From: Christian Theune @ 2025-01-24  6:38 UTC (permalink / raw)
  To: Yu Kuai
  Cc: Xiao Ni, John Stoffel, linux-raid@vger.kernel.org, dm-devel,
	Dragan Milivojević, yangerkun@huawei.com, David Jeffery,
	yukuai (C)

Hi,

> On 24. Jan 2025, at 07:35, Yu Kuai <yukuai1@huaweicloud.com> wrote:
> 
> Hi,
> 
> 在 2025/01/24 14:22, Christian Theune 写道:
>> Hi,
>> I’ve been running 6.13 with those patches[1] successfully with production load for about 3 days now - this looks like it’s fixed!
> 
> Thanks for the test!
> 
>> I’d appreciate a backport to 6.6 … is there anyone specific I should notify about that?
> 
> This is enough, I'll do that. However, Spring Festival is coming, I'll
> rebase patches to 6.6 after about two weeks, if you don't mind. :)

Thanks a lot - I’m patient enough and can keep the affected hosts running on my patched 6.13 for the time being. I’d appreciate a little ping when you’re ready.

Enjoy your festival!

Cheers,
Christian

-- 
Christian Theune · ct@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick


^ permalink raw reply	[flat|nested] 88+ messages in thread

end of thread, other threads:[~2025-01-24  6:39 UTC | newest]

Thread overview: 88+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-06 14:10 PROBLEM: repeatable lockup on RAID-6 with LUKS dm-crypt on NVMe devices when rsyncing many files Christian Theune
2024-08-06 14:10 ` Christian Theune
2024-08-07  2:55 ` Yu Kuai
2024-08-07  5:31   ` Christian Theune
2024-08-07  6:46     ` Christian Theune
2024-08-07  8:59       ` Christian Theune
2024-08-07 21:05         ` John Stoffel
2024-08-08  1:33           ` Yu Kuai
2024-08-08  6:02           ` Christian Theune
2024-08-08  6:55             ` Yu Kuai
2024-08-08  7:06               ` Christian Theune
2024-08-08  8:53                 ` Christian Theune
2024-08-09  1:13                   ` Yu Kuai
2024-08-09  6:10                     ` Christian Theune
2024-08-09 22:51                       ` John Stoffel
2024-08-12  6:58                         ` Christian Theune
2024-08-12 18:37                           ` John Stoffel
2024-08-14  8:53                             ` Christian Theune
2024-08-15  6:19                               ` Christian Theune
2024-08-15 10:03                                 ` Christian Theune
2024-08-15 11:14                                   ` Yu Kuai
2024-08-15 11:24                                     ` Christian Theune
2024-08-15 11:49                                       ` Yu Kuai
2024-10-22 15:02                                     ` Christian Theune
2024-10-23  1:13                                       ` Yu Kuai
2024-10-23  6:03                                         ` Christian Theune
2024-10-23 17:50                                           ` Christian Theune
2024-10-25  8:39                                         ` Christian Theune
2024-10-25 13:31                                           ` Dragan Milivojević
2024-10-25 14:02                                             ` Christian Theune
2024-10-26  5:37                                               ` Christian Theune
2024-10-26  9:07                                                 ` Yu Kuai
2024-10-26 11:51                                                   ` Christian Theune
2024-10-26 12:07                                                   ` Christian Theune
2024-10-26 12:11                                                     ` Christian Theune
2024-10-30  1:25                                                       ` Yu Kuai
2024-10-30  6:29                                                         ` Christian Theune
2024-10-31  7:48                                                           ` Yu Kuai
2024-10-31  8:04                                                             ` Christian Theune
2024-10-31 15:07                                                               ` Christian Theune
2024-10-31 19:46                                                                 ` Christian Theune
2024-10-31 20:33                                                                   ` John Stoffel
2024-11-01  2:02                                                                     ` Yu Kuai
2024-11-01  7:56                                                                       ` Christian Theune
2024-11-01  8:33                                                                         ` Christian Theune
2024-11-03 15:54                                                                           ` Christian Theune
2024-11-03 16:16                                                                             ` Dragan Milivojević
2024-11-04 11:29                                                                           ` Yu Kuai
2024-11-04 11:51                                                                             ` Christian Theune
2024-11-04 12:30                                                                               ` Yu Kuai
2024-11-04 11:40                                                                           ` Yu Kuai
2024-11-04 12:18                                                                             ` Yu Kuai
2024-11-04 14:45                                                                               ` Christian Theune
2024-11-04 20:04                                                                                 ` Christian Theune
2024-11-05  1:20                                                                                   ` Yu Kuai
2024-11-05  6:23                                                                                     ` Christian Theune
2024-11-05 10:15                                                                                       ` Christian Theune
2024-11-06  6:35                                                                                         ` Yu Kuai
2024-11-06  6:40                                                                                           ` Christian Theune
2024-11-07  7:55                                                                                             ` Yu Kuai
2024-11-07  8:01                                                                                               ` Yu Kuai
2024-11-09 11:35                                                                                               ` Xiao Ni
2024-11-11  2:25                                                                                                 ` Yu Kuai
2024-11-11  8:00                                                                                                 ` Christian Theune
2024-11-11 14:34                                                                                                   ` Christian Theune
2024-11-12  6:57                                                                                                     ` Christian Theune
2024-11-14 15:07                                                                                                       ` Christian Theune
2024-11-15  8:07                                                                                                         ` Xiao Ni
2024-11-15  8:44                                                                                                           ` Christian Theune
2024-11-15 10:11                                                                                                             ` Xiao Ni
2024-11-15 11:06                                                                                                               ` Christian Theune
2024-12-10  8:33                                                                                                                 ` Christian Theune
2024-12-16 13:25                                                                                                                   ` Christian Theune
2024-12-16 13:36                                                                                                                     ` Yu Kuai
2024-12-16 14:18                                                                                                                       ` Christian Theune
2025-01-20  9:19                                                                                                                         ` Christian Theune
2025-01-24  6:22                                                                                                                           ` Christian Theune
2025-01-24  6:35                                                                                                                             ` Yu Kuai
2025-01-24  6:38                                                                                                                               ` Christian Theune
2024-08-15 15:53                                 ` John Stoffel
2024-08-15 19:13                                   ` Christian Theune
2024-08-26 14:38                                     ` Christian Theune
2024-08-08 14:23             ` John Stoffel
2024-08-19 19:12               ` tihmstar
2024-08-19 21:05                 ` John Stoffel
2024-08-24 16:56                   ` tihmstar
2024-08-24 18:12                   ` Dragan Milivojević
2024-08-27  1:27                     ` John Stoffel

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox