From: Dongsheng Yang <dongsheng.yang@easystack.cn>
To: Ilya Dryomov <idryomov@gmail.com>
Cc: ceph-devel@vger.kernel.org
Subject: Re: [PATCH] rbd: prevent busy loop when requesting exclusive lock
Date: Wed, 2 Aug 2023 14:52:20 +0800 [thread overview]
Message-ID: <256d4476-1fb6-5129-2e52-7a09f194a9b4@easystack.cn> (raw)
In-Reply-To: <CAOi1vP9s_4j8QLBYRyCTi3XCKdPagtZusM=S+z7BJDbHAVye_Q@mail.gmail.com>
On Wed, 2 Aug 2023 at 14:41, Ilya Dryomov wrote:
> On Wed, Aug 2, 2023 at 8:36 AM Dongsheng Yang
> <dongsheng.yang@easystack.cn> wrote:
>>
>> Hi Ilya
>>
>> On Wed, 2 Aug 2023 at 06:22, Ilya Dryomov wrote:
>>> Due to rbd_try_acquire_lock() effectively swallowing all but
>>> EBLOCKLISTED error from rbd_try_lock() ("request lock anyway") and
>>> rbd_request_lock() returning ETIMEDOUT error not only for an actual
>>> notify timeout but also when the lock owner doesn't respond, a busy
>>> loop inside of rbd_acquire_lock() between rbd_try_acquire_lock() and
>>> rbd_request_lock() is possible.
>>>
>>> Requesting the lock on EBUSY error (returned by get_lock_owner_info()
>>> if an incompatible lock or invalid lock owner is detected) makes very
>>> little sense. The same goes for ETIMEDOUT error (might pop up pretty
>>> much anywhere if osd_request_timeout option is set) and many others.
>>>
>>> Just fail I/O requests on rbd_dev->acquiring_list immediately on any
>>> error from rbd_try_lock().
>>>
>>> Cc: stable@vger.kernel.org # 588159009d5b: rbd: retrieve and check lock owner twice before blocklisting
>>> Cc: stable@vger.kernel.org
>>> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
>>> ---
>>> drivers/block/rbd.c | 28 +++++++++++++++-------------
>>> 1 file changed, 15 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
>>> index 24afcc93ac01..2328cc05be36 100644
>>> --- a/drivers/block/rbd.c
>>> +++ b/drivers/block/rbd.c
>>> @@ -3675,7 +3675,7 @@ static int rbd_lock(struct rbd_device *rbd_dev)
>>> ret = ceph_cls_lock(osdc, &rbd_dev->header_oid, &rbd_dev->header_oloc,
>>> RBD_LOCK_NAME, CEPH_CLS_LOCK_EXCLUSIVE, cookie,
>>> RBD_LOCK_TAG, "", 0);
>>> - if (ret)
>>> + if (ret && ret != -EEXIST)
>>> return ret;
>>>
>>> __rbd_lock(rbd_dev, cookie);
>>
>> If we get -EEXIST here, we call __rbd_lock() and return 0. -EEXIST
>> means the lock is already held by us, so is it still necessary to
>> call __rbd_lock()?
>
> Hi Dongsheng,
>
> Yes, because the reason rbd_lock() gets called in the first place is
> that the kernel client doesn't "know" that it's still holding the lock
> in RADOS. This can happen if the unlock operation times out, for
> example.
>
> Notice
>
> WARN_ON(__rbd_is_lock_owner(rbd_dev) ||
> rbd_dev->lock_cookie[0] != '\0');
>
> at the top of rbd_lock().
Okay, then:
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
>
> Thanks,
>
> Ilya
Thread overview: 4 messages
2023-08-01 22:22 [PATCH] rbd: prevent busy loop when requesting exclusive lock Ilya Dryomov
2023-08-02 6:35 ` Dongsheng Yang
2023-08-02 6:41 ` Ilya Dryomov
2023-08-02 6:52 ` Dongsheng Yang [this message]