From: Anand Jain <anand.jain@oracle.com>
To: Josef Bacik <josef@toxicpanda.com>,
linux-btrfs@vger.kernel.org, kernel-team@fb.com
Cc: David Sterba <dsterba@suse.com>
Subject: Re: [PATCH v2 2/7] btrfs: do not take the uuid_mutex in btrfs_rm_device
Date: Thu, 2 Sep 2021 03:49:36 +0800 [thread overview]
Message-ID: <96ffa708-adfc-83c0-bd64-6af8926b372d@oracle.com> (raw)
In-Reply-To: <4c913c5a-7cde-7339-69b7-64908c7e1e7a@toxicpanda.com>
On 02/09/2021 01:10, Josef Bacik wrote:
> On 9/1/21 8:01 AM, Anand Jain wrote:
>> On 28/07/2021 05:01, Josef Bacik wrote:
>>> We got the following lockdep splat while running xfstests (specifically
>>> btrfs/003 and btrfs/020 in a row) with the new rc. This was uncovered
>>> by 87579e9b7d8d ("loop: use worker per cgroup instead of kworker") which
>>> converted loop to using workqueues, which comes with lockdep
>>> annotations that don't exist with kworkers. The lockdep splat is as
>>> follows
>>>
>>> ======================================================
>>> WARNING: possible circular locking dependency detected
>>> 5.14.0-rc2-custom+ #34 Not tainted
>>> ------------------------------------------------------
>>> losetup/156417 is trying to acquire lock:
>>> ffff9c7645b02d38 ((wq_completion)loop0){+.+.}-{0:0}, at:
>>> flush_workqueue+0x84/0x600
>>>
>>> but task is already holding lock:
>>> ffff9c7647395468 (&lo->lo_mutex){+.+.}-{3:3}, at:
>>> __loop_clr_fd+0x41/0x650 [loop]
>>>
>>> which lock already depends on the new lock.
>>>
>>> the existing dependency chain (in reverse order) is:
>>>
>>> -> #5 (&lo->lo_mutex){+.+.}-{3:3}:
>>> __mutex_lock+0xba/0x7c0
>>> lo_open+0x28/0x60 [loop]
>>> blkdev_get_whole+0x28/0xf0
>>> blkdev_get_by_dev.part.0+0x168/0x3c0
>>> blkdev_open+0xd2/0xe0
>>> do_dentry_open+0x163/0x3a0
>>> path_openat+0x74d/0xa40
>>> do_filp_open+0x9c/0x140
>>> do_sys_openat2+0xb1/0x170
>>> __x64_sys_openat+0x54/0x90
>>> do_syscall_64+0x3b/0x90
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>
>>> -> #4 (&disk->open_mutex){+.+.}-{3:3}:
>>> __mutex_lock+0xba/0x7c0
>>> blkdev_get_by_dev.part.0+0xd1/0x3c0
>>> blkdev_get_by_path+0xc0/0xd0
>>> btrfs_scan_one_device+0x52/0x1f0 [btrfs]
>>> btrfs_control_ioctl+0xac/0x170 [btrfs]
>>> __x64_sys_ioctl+0x83/0xb0
>>> do_syscall_64+0x3b/0x90
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>
>>> -> #3 (uuid_mutex){+.+.}-{3:3}:
>>> __mutex_lock+0xba/0x7c0
>>> btrfs_rm_device+0x48/0x6a0 [btrfs]
>>> btrfs_ioctl+0x2d1c/0x3110 [btrfs]
>>> __x64_sys_ioctl+0x83/0xb0
>>> do_syscall_64+0x3b/0x90
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>
>>> -> #2 (sb_writers#11){.+.+}-{0:0}:
>>> lo_write_bvec+0x112/0x290 [loop]
>>> loop_process_work+0x25f/0xcb0 [loop]
>>> process_one_work+0x28f/0x5d0
>>> worker_thread+0x55/0x3c0
>>> kthread+0x140/0x170
>>> ret_from_fork+0x22/0x30
>>>
>>> -> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
>>> process_one_work+0x266/0x5d0
>>> worker_thread+0x55/0x3c0
>>> kthread+0x140/0x170
>>> ret_from_fork+0x22/0x30
>>>
>>> -> #0 ((wq_completion)loop0){+.+.}-{0:0}:
>>> __lock_acquire+0x1130/0x1dc0
>>> lock_acquire+0xf5/0x320
>>> flush_workqueue+0xae/0x600
>>> drain_workqueue+0xa0/0x110
>>> destroy_workqueue+0x36/0x250
>>> __loop_clr_fd+0x9a/0x650 [loop]
>>> lo_ioctl+0x29d/0x780 [loop]
>>> block_ioctl+0x3f/0x50
>>> __x64_sys_ioctl+0x83/0xb0
>>> do_syscall_64+0x3b/0x90
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>
>>> other info that might help us debug this:
>>> Chain exists of:
>>> (wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex
>>> Possible unsafe locking scenario:
>>>        CPU0                    CPU1
>>>        ----                    ----
>>>   lock(&lo->lo_mutex);
>>>                                lock(&disk->open_mutex);
>>>                                lock(&lo->lo_mutex);
>>>   lock((wq_completion)loop0);
>>>
>>> *** DEADLOCK ***
>>> 1 lock held by losetup/156417:
>>> #0: ffff9c7647395468 (&lo->lo_mutex){+.+.}-{3:3}, at:
>>> __loop_clr_fd+0x41/0x650 [loop]
>>>
>>> stack backtrace:
>>> CPU: 8 PID: 156417 Comm: losetup Not tainted 5.14.0-rc2-custom+ #34
>>> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0
>>> 02/06/2015
>>> Call Trace:
>>> dump_stack_lvl+0x57/0x72
>>> check_noncircular+0x10a/0x120
>>> __lock_acquire+0x1130/0x1dc0
>>> lock_acquire+0xf5/0x320
>>> ? flush_workqueue+0x84/0x600
>>> flush_workqueue+0xae/0x600
>>> ? flush_workqueue+0x84/0x600
>>> drain_workqueue+0xa0/0x110
>>> destroy_workqueue+0x36/0x250
>>> __loop_clr_fd+0x9a/0x650 [loop]
>>> lo_ioctl+0x29d/0x780 [loop]
>>> ? __lock_acquire+0x3a0/0x1dc0
>>> ? update_dl_rq_load_avg+0x152/0x360
>>> ? lock_is_held_type+0xa5/0x120
>>> ? find_held_lock.constprop.0+0x2b/0x80
>>> block_ioctl+0x3f/0x50
>>> __x64_sys_ioctl+0x83/0xb0
>>> do_syscall_64+0x3b/0x90
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>> RIP: 0033:0x7f645884de6b
>>>
>>> Usually the uuid_mutex exists to protect the fs_devices that map
>>> together all of the devices that match a specific uuid. In rm_device
>>> we're messing with the uuid of a device, so it makes sense to protect
>>> that here.
>>>
>>> However in doing that it pulls in a whole host of lockdep dependencies,
>>> as we call mnt_may_write() on the sb before we grab the uuid_mutex, thus
>>> we end up with the dependency chain under the uuid_mutex being added
>>> under the normal sb write dependency chain, which causes problems with
>>> loop devices.
>>>
>>> We don't need the uuid mutex here however. If we call
>>> btrfs_scan_one_device() before we scratch the super block we will find
>>> the fs_devices and not find the device itself and return EBUSY because
>>> the fs_devices is open. If we call it after the scratch happens it will
>>> not appear to be a valid btrfs file system.
>>>
>>> We do not need to worry about other fs_devices modifying operations here
>>> because we're protected by the exclusive operations locking.
>>>
>>> So drop the uuid_mutex here in order to fix the lockdep splat.
>>
>>
>> I think uuid_mutex should stay. Here is why.
>>
>> While thread A takes %device at line 816 and dereferences it at line
>> 880, thread B can completely remove and free that %device.
>> As of now these two threads are mutually exclusive only because of the
>> uuid_mutex.
>>
>> Thread A
>>
>> btrfs_control_ioctl()
>>   mutex_lock(&uuid_mutex);
>>   btrfs_scan_one_device()
>>     device_list_add()
>>     {
>> 815    mutex_lock(&fs_devices->device_list_mutex);
>>
>> 816    device = btrfs_find_device(fs_devices, devid,
>> 817                               disk_super->dev_item.uuid, NULL);
>>
>> 880    } else if (!device->name || strcmp(device->name->str, path)) {
>>
>> 933    if (device->bdev->bd_dev != path_dev) {
>>
>> 982    mutex_unlock(&fs_devices->device_list_mutex);
>>     }
>>
>>
>> Thread B
>>
>> btrfs_rm_device()
>>
>> 2069 mutex_lock(&uuid_mutex); <-- proposed to remove
>>
>> 2150 mutex_lock(&fs_devices->device_list_mutex);
>> 2151 list_del_rcu(&device->dev_list); <----
>>
>> 2172 mutex_unlock(&fs_devices->device_list_mutex);
>>
>> 2180 btrfs_scratch_superblocks(fs_info, device->bdev,
>> 2181 device->name->str);
>>
>> 2183 btrfs_close_bdev(device);
>> 2184 synchronize_rcu();
>> 2185 btrfs_free_device(device);
>>
>> 2194 mutex_unlock(&uuid_mutex); <-- proposed to remove
>>
>>
>
> This is fine, we're protected by the fs_devices->device_list_mutex here.
> We'll remove our device from the list before dropping the
> device_list_mutex,
You are right. I missed that point.
The changes look good to me.
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Thanks.
> so we won't be able to find the old device if we're
> removing it. Thanks,
>
> Josef
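
For anyone following along, here is a tiny user-space model of why the
device_list_mutex alone is enough to serialize the two paths. It is only a
sketch (a pthread mutex and a plain singly linked list stand in for
fs_devices->device_list_mutex and the device list; none of this is btrfs
code): the scan side finds and dereferences the device entirely under the
lock, and the remove side unlinks the device under the same lock before it
frees anything, so the scan either sees a fully valid device or none at all.

/* Illustrative only -- models the locking argument, not btrfs itself. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct device {
	int devid;
	struct device *next;
};

/* Stand-in for fs_devices->device_list_mutex and the device list. */
static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct device *device_list;

/* Scan path: find *and* dereference the device under list_mutex, the
 * way device_list_add() does between its lock at 815 and unlock at 982. */
static void *scan_thread(void *arg)
{
	int devid = *(int *)arg;
	struct device *d;

	pthread_mutex_lock(&list_mutex);
	for (d = device_list; d; d = d->next) {
		if (d->devid == devid) {
			printf("scan: found devid %d\n", d->devid);
			break;
		}
	}
	if (!d)
		printf("scan: devid %d already gone\n", devid);
	pthread_mutex_unlock(&list_mutex);
	return NULL;
}

/* Remove path: unlink under list_mutex first, free only afterwards,
 * the way btrfs_rm_device() does list_del_rcu() before the later
 * btrfs_free_device(). */
static void *rm_thread(void *arg)
{
	int devid = *(int *)arg;
	struct device **p, *victim = NULL;

	pthread_mutex_lock(&list_mutex);
	for (p = &device_list; *p; p = &(*p)->next) {
		if ((*p)->devid == devid) {
			victim = *p;
			*p = victim->next;	/* stands in for list_del_rcu() */
			break;
		}
	}
	pthread_mutex_unlock(&list_mutex);
	free(victim);			/* scratch/close/free happen after the unlink */
	return NULL;
}

int main(void)
{
	struct device *d = malloc(sizeof(*d));
	pthread_t a, b;
	int devid = 1;

	d->devid = devid;
	d->next = NULL;
	device_list = d;

	/* Whatever the interleaving, the scan never touches freed memory. */
	pthread_create(&a, NULL, scan_thread, &devid);
	pthread_create(&b, NULL, rm_thread, &devid);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

(Builds with gcc -pthread.) The point is just that the unlink-before-free
ordering under the one mutex is what closes the window, with or without the
uuid_mutex.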