Date: Thu, 19 Jan 2023 19:26:39 -0800
From: Boqun Feng <boqun.feng@gmail.com>
To: Byungchul Park
Cc: linux-kernel@vger.kernel.org, torvalds@linux-foundation.org,
	damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
	adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com,
	peterz@infradead.org, will@kernel.org, tglx@linutronix.de,
	rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org,
	daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com,
	tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com,
	amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com,
	linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org,
	minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com,
	sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com,
	penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
	ngupta@vflare.org, linux-block@vger.kernel.org,
	paolo.valente@linaro.org, josef@toxicpanda.com,
	linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz,
	jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org,
	djwong@kernel.org, dri-devel@lists.freedesktop.org,
	rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com,
	hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com,
	gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com,
	longman@redhat.com
Subject: Re: [PATCH RFC v7 00/23] DEPT(Dependency Tracker)
References: <1674179505-26987-1-git-send-email-byungchul.park@lge.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Thu, Jan 19, 2023 at 07:07:59PM -0800, Boqun Feng wrote:
> On Thu, Jan 19, 2023 at 06:23:49PM -0800, Boqun Feng wrote:
> > On Fri, Jan 20, 2023 at 10:51:45AM +0900, Byungchul Park wrote:
> > > Boqun wrote:
> > > > On Thu, Jan 19, 2023 at 01:33:58PM +0000, Matthew Wilcox wrote:
> > > > > On Thu, Jan 19,
> > > > > 2023 at 03:23:08PM +0900, Byungchul Park wrote:
> > > > > > Boqun wrote:
> > > > > > > *Looks like the DEPT dependency graph doesn't handle the
> > > > > > > fair/unfair readers as lockdep currently does. Which brings up
> > > > > > > the next question.
> > > > > >
> > > > > > No. DEPT works better for unfair reads. It works based on
> > > > > > wait/event, so read_lock() is considered a potential wait waiting
> > > > > > on write_unlock(), while write_lock() is considered a potential
> > > > > > wait waiting on either write_unlock() or read_unlock(). DEPT
> > > > > > handles this perfectly.
> > > > > >
> > > > > > For fair reads (maybe you meant queued read locks), I think the
> > > > > > case should be handled in the same way as a normal lock. I might
> > > > > > have it wrong. Please let me know if I missed something.
> > > > >
> > > > > From the lockdep/DEPT point of view, the question is whether:
> > > > >
> > > > >     read_lock(A)
> > > > >     read_lock(A)
> > > > >
> > > > > can deadlock if a writer comes in between the two acquisitions and
> > > > > sleeps waiting on A to be released.  A fair lock will block new
> > > > > readers when a writer is waiting, while an unfair lock will allow
> > > > > new readers even while a writer is waiting.
> > > >
> > > > To be more accurate, a fair reader will wait if there is a writer
> > > > waiting for other readers (fair or not) to unlock, and an unfair
> > > > reader won't.
> > >
> > > How kind of you both! Thanks.
> > >
> > > I asked in order to check whether there are other subtle things
> > > besides this. Fortunately, I already understand what you two shared.
> > >
> > > > In the kernel there are read/write locks that can have both fair and
> > > > unfair readers (e.g. queued rwlock).
> > > > Regarding deadlocks,
> > > >
> > > >     T0                   T1                    T2
> > > >     --                   --                    --
> > > >     fair_read_lock(A);
> > > >                          write_lock(B);
> > > >                                                write_lock(A);
> > > >     write_lock(B);
> > > >                          unfair_read_lock(A);
> > >
> > > From DEPT's point of view (let me re-write the scenario):
> > >
> > >     T0                   T1                    T2
> > >     --                   --                    --
> > >     fair_read_lock(A);
> > >                          write_lock(B);
> > >                                                write_lock(A);
> > >     write_lock(B);
> > >                          unfair_read_lock(A);
> > >                          write_unlock(B);
> > >                          read_unlock(A);
> > >     read_unlock(A);
> > >     write_unlock(B);
> > >                                                write_unlock(A);
> > >
> > > T0: read_unlock(A) cannot happen if write_lock(B) is stuck by a B
> > > owner not doing either write_unlock(B) or read_unlock(B). In other
> > > words:
> > >
> > >     1. read_unlock(A) happening depends on write_unlock(B) happening.
> > >     2. read_unlock(A) happening depends on read_unlock(B) happening.
> > >
> > > T1: write_unlock(B) cannot happen if unfair_read_lock(A) is stuck by
> > > an A owner not doing write_unlock(A). In other words:
> > >
> > >     3. write_unlock(B) happening depends on write_unlock(A) happening.
> > >
> > > 1, 2 and 3 give the following dependencies:
> > >
> > >     1. read_unlock(A) -> write_unlock(B)
> > >     2. read_unlock(A) -> read_unlock(B)
> > >     3. write_unlock(B) -> write_unlock(A)
> > >
> > > There's no circular dependency, so it's safe. DEPT doesn't report
> > > this.
> > >
> > > > The above is not a deadlock, since T1's unfair reader can "steal"
> > > > the lock. However, the following is a deadlock:
> > > >
> > > >     T0                   T1                    T2
> > > >     --                   --                    --
> > > >     unfair_read_lock(A);
> > > >                          write_lock(B);
> > > >                                                write_lock(A);
> > > >     write_lock(B);
> > > >                          fair_read_lock(A);
> > > >
> > > > , since T1's fair reader will wait.
> > >
> > > From DEPT's point of view (let me re-write the scenario):
> > >
> > >     T0                   T1                    T2
> > >     --                   --                    --
> > >     unfair_read_lock(A);
> > >                          write_lock(B);
> > >                                                write_lock(A);
> > >     write_lock(B);
> > >                          fair_read_lock(A);
> > >                          write_unlock(B);
> > >                          read_unlock(A);
> > >     read_unlock(A);
> > >     write_unlock(B);
> > >                                                write_unlock(A);
> > >
> > > T0: read_unlock(A) cannot happen if write_lock(B) is stuck by a B
> > > owner not doing either write_unlock(B) or read_unlock(B). In other
> > > words:
> > >
> > >     1. read_unlock(A) happening depends on write_unlock(B) happening.
> > >     2. read_unlock(A) happening depends on read_unlock(B) happening.
> > >
> > > T1: write_unlock(B) cannot happen if fair_read_lock(A) is stuck by an
> > > A owner not doing either write_unlock(A) or read_unlock(A). In other
> > > words:
> > >
> > >     3. write_unlock(B) happening depends on write_unlock(A) happening.
> > >     4. write_unlock(B) happening depends on read_unlock(A) happening.
> > >
> > > 1, 2, 3 and 4 give the following dependencies:
> > >
> > >     1. read_unlock(A) -> write_unlock(B)
> > >     2. read_unlock(A) -> read_unlock(B)
> > >     3. write_unlock(B) -> write_unlock(A)
> > >     4. write_unlock(B) -> read_unlock(A)
> > >
> > > With 1 and 4, there's a circular dependency, so DEPT definitely
> > > reports this as a problem.
> > >
> > > REMIND: DEPT focuses on waits and events.
> >
> > Do you have test cases showing DEPT can detect this?
> >
> Just tried the following on your latest GitHub branch. I commented out
> all but one deadlock case. Lockdep CAN detect it but DEPT CANNOT detect
> it. Feel free to double-check.
>
In case anyone else wants to try, let me explain a little about how to
verify the behavior of the two detectors. With the change, the only test
that runs is

    dotest(queued_read_lock_hardirq_RE_Er, FAILURE, LOCKTYPE_RWLOCK);

"FAILURE" indicates the selftests think lockdep should report a deadlock;
therefore, if all goes well, lockdep will show:

    [...]
    hardirq read-lock/lock-read:             ok  |

If you expect lockdep to print a full splat for the test (lockdep is
silent by default), you can add "debug_locks_verbose=2" to the kernel
command line; "2" means the RWLOCK testsuite.

Regards,
Boqun

> Regards,
> Boqun