linux-btrfs.vger.kernel.org archive mirror
From: Nikolay Borisov <nborisov@suse.com>
To: linux-btrfs@vger.kernel.org
Cc: Nikolay Borisov <nborisov@suse.com>
Subject: [PATCH v3 2/6] btrfs: Remove fs_info from struct async_chunk
Date: Thu, 21 Feb 2019 13:57:13 +0200
Message-ID: <20190221115717.5128-3-nborisov@suse.com>
In-Reply-To: <20190221115717.5128-1-nborisov@suse.com>

The associated btrfs_work already contains a reference to the fs_info, so
use that instead of passing it via struct async_chunk. No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/inode.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0f82ea348164..d61dd538d2b4 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -368,7 +368,6 @@ struct async_extent {
 
 struct async_chunk {
 	struct inode *inode;
-	struct btrfs_fs_info *fs_info;
 	struct page *locked_page;
 	u64 start;
 	u64 end;
@@ -1156,13 +1155,11 @@ static noinline void async_cow_start(struct btrfs_work *work)
  */
 static noinline void async_cow_submit(struct btrfs_work *work)
 {
-	struct btrfs_fs_info *fs_info;
-	struct async_chunk *async_cow;
+	struct async_chunk *async_cow = container_of(work, struct async_chunk,
+						     work);
+	struct btrfs_fs_info *fs_info = btrfs_work_owner(work);
 	unsigned long nr_pages;
 
-	async_cow = container_of(work, struct async_chunk, work);
-
-	fs_info = async_cow->fs_info;
 	nr_pages = (async_cow->end - async_cow->start + PAGE_SIZE) >>
 		PAGE_SHIFT;
 
@@ -1249,7 +1246,6 @@ static int cow_file_range_async(struct inode *inode, struct page *locked_page,
 		async_cow[i].inode = inode;
 		async_cow[i].start = start;
 		async_cow[i].end = cur_end;
-		async_cow[i].fs_info = fs_info;
 		async_cow[i].locked_page = locked_page;
 		async_cow[i].write_flags = write_flags;
 		INIT_LIST_HEAD(&async_cow[i].extents);
-- 
2.17.1



Thread overview: 15+ messages
2019-02-21 11:57 [PATCH v3 0/6] Compress path cleanups Nikolay Borisov
2019-02-21 11:57 ` [PATCH v3 1/6] btrfs: Refactor cow_file_range_async Nikolay Borisov
2019-02-21 13:15   ` Johannes Thumshirn
2019-02-21 13:25     ` Nikolay Borisov
2019-02-21 15:07       ` Johannes Thumshirn
2019-02-21 15:09         ` Nikolay Borisov
2019-02-22 18:05           ` David Sterba
2019-02-22 18:13   ` David Sterba
2019-02-21 11:57 ` Nikolay Borisov [this message]
2019-02-21 13:07   ` [PATCH v3 2/6] btrfs: Remove fs_info from struct async_chunk Johannes Thumshirn
2019-02-21 11:57 ` [PATCH v3 3/6] btrfs: Make compress_file_range take only " Nikolay Borisov
2019-02-21 13:07   ` Johannes Thumshirn
2019-02-21 11:57 ` [PATCH v3 4/6] btrfs: Replace clear_extent_bit with unlock_extent Nikolay Borisov
2019-02-21 11:57 ` [PATCH v3 5/6] btrfs: Set iotree only once in submit_compressed_extents Nikolay Borisov
2019-02-21 11:57 ` [PATCH v3 6/6] btrfs: Factor out common extent locking code " Nikolay Borisov
