Linux Btrfs filesystem development
From: David Sterba <dsterba@suse.cz>
To: Josef Bacik <josef@toxicpanda.com>
Cc: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 4/8] btrfs: remove the recursion handling code in locking.c
Date: Wed, 11 Nov 2020 15:29:30 +0100	[thread overview]
Message-ID: <20201111142930.GP6756@twin.jikos.cz> (raw)
In-Reply-To: <c04e7bd2e5294b23eadbcafedca7214f7894c9e9.1604697895.git.josef@toxicpanda.com>

On Fri, Nov 06, 2020 at 04:27:32PM -0500, Josef Bacik wrote:
> @@ -71,31 +47,7 @@ void __btrfs_tree_read_lock(struct extent_buffer *eb, enum btrfs_lock_nesting ne
>  	if (trace_btrfs_tree_read_lock_enabled())
>  		start_ns = ktime_get_ns();
>  
> -	if (unlikely(recurse)) {
> -		/* First see if we can grab the lock outright */
> -		if (down_read_trylock(&eb->lock))
> -			goto out;
> -
> -		/*
> -		 * Ok still doesn't necessarily mean we are already holding the
> -		 * lock, check the owner.
> -		 */
> -		if (eb->lock_owner != current->pid) {

This

> -			down_read_nested(&eb->lock, nest);
> -			goto out;
> -		}
> -
> -		/*
> -		 * Ok we have actually recursed, but we should only be recursing
> -		 * once, so blow up if we're already recursed, otherwise set
> -		 * ->lock_recursed and carry on.
> -		 */
> -		BUG_ON(eb->lock_recursed);
> -		eb->lock_recursed = true;
> -		goto out;
> -	}
>  	down_read_nested(&eb->lock, nest);
> -out:
>  	eb->lock_owner = current->pid;
>  	trace_btrfs_tree_read_lock(eb, start_ns);
>  }
> @@ -136,22 +88,11 @@ int btrfs_try_tree_write_lock(struct extent_buffer *eb)
>  }
>  
>  /*
> - * Release read lock.  If the read lock was recursed then the lock stays in the
> - * original state that it was before it was recursively locked.
> + * Release read lock.
>   */
>  void btrfs_tree_read_unlock(struct extent_buffer *eb)
>  {
>  	trace_btrfs_tree_read_unlock(eb);
> -	/*
> -	 * if we're nested, we have the write lock.  No new locking
> -	 * is needed as long as we are the lock owner.
> -	 * The write unlock will do a barrier for us, and the lock_recursed
> -	 * field only matters to the lock owner.
> -	 */
> -	if (eb->lock_recursed && current->pid == eb->lock_owner) {

And these were the last uses of lock_owner inside the locking code, so once the
recursion is gone, the remaining users are:

btrfs_init_new_buffer:

        /*
         * Extra safety check in case the extent tree is corrupted and extent
         * allocator chooses to use a tree block which is already used and
         * locked.
         */
        if (buf->lock_owner == current->pid) {
                btrfs_err_rl(fs_info,
"tree block %llu owner %llu already locked by pid=%d, extent tree corruption detected",
                        buf->start, btrfs_header_owner(buf), current->pid);
                free_extent_buffer(buf);
                return ERR_PTR(-EUCLEAN);
        }

And

/*
 * Helper to output refs and locking status of extent buffer.  Useful to debug
 * race condition related problems.
 */
static void print_eb_refs_lock(struct extent_buffer *eb)
{
#ifdef CONFIG_BTRFS_DEBUG
        btrfs_info(eb->fs_info, "refs %u lock_owner %u current %u",
                   atomic_read(&eb->refs), eb->lock_owner, current->pid);
#endif
}

The safety check was added in b72c3aba09a53fc7c18 ("btrfs: locking: Add
extra check in btrfs_init_new_buffer() to avoid deadlock") and it seems
useful, but I think it builds on the assumptions of the previous tree
locks. The mentioned warning relies on the recursive locking that is
being removed.

For debugging we could keep lock_owner in the extent_buffer, but under
CONFIG_BTRFS_DEBUG, so the eb size is reduced for release builds.

Thread overview: 22+ messages
2020-11-06 21:27 [PATCH 0/8] Locking cleanups and lockdep fix Josef Bacik
2020-11-06 21:27 ` [PATCH 1/8] btrfs: cleanup the locking in btrfs_next_old_leaf Josef Bacik
2020-11-09 10:06   ` Filipe Manana
2020-11-06 21:27 ` [PATCH 2/8] btrfs: unlock to current level " Josef Bacik
2020-11-09 10:12   ` Filipe Manana
2020-11-06 21:27 ` [PATCH 3/8] btrfs: kill path->recurse Josef Bacik
2020-11-09 10:19   ` Filipe Manana
2020-11-06 21:27 ` [PATCH 4/8] btrfs: remove the recursion handling code in locking.c Josef Bacik
2020-11-09 10:20   ` Filipe Manana
2020-11-11 14:14   ` David Sterba
2020-11-11 14:29   ` David Sterba [this message]
2020-11-11 14:43     ` Josef Bacik
2020-11-11 14:59       ` David Sterba
2020-11-06 21:27 ` [PATCH 5/8] btrfs: remove __btrfs_read_lock_root_node Josef Bacik
2020-11-09 10:20   ` Filipe Manana
2020-11-06 21:27 ` [PATCH 6/8] btrfs: use btrfs_tree_read_lock in btrfs_search_slot Josef Bacik
2020-11-09 10:21   ` Filipe Manana
2020-11-06 21:27 ` [PATCH 7/8] btrfs: remove the recurse parameter from __btrfs_tree_read_lock Josef Bacik
2020-11-09 10:22   ` Filipe Manana
2020-11-06 21:27 ` [PATCH 8/8] btrfs: remove ->recursed from extent_buffer Josef Bacik
2020-11-09 10:23   ` Filipe Manana
2020-11-12 18:18 ` [PATCH 0/8] Locking cleanups and lockdep fix David Sterba
