From: Jaegeuk Kim <jaegeuk@kernel.org>
To: Daeho Jeong <daeho43@gmail.com>
Cc: Eric Biggers <ebiggers@kernel.org>,
	linux-f2fs-devel@lists.sourceforge.net, kernel-team@android.com,
	Daeho Jeong <daehojeong@google.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [f2fs-dev] [PATCH] f2fs: fix race of pending_pages in decompression
Date: Fri, 4 Dec 2020 10:29:22 -0800
Message-ID: <X8qAAiRah/zmowZB@google.com>
In-Reply-To: <CACOAw_wfew8xER-CibUtddRKVtcr3k_iGzjQ-bVYxqRUuEVCcw@mail.gmail.com>

On 12/04, Daeho Jeong wrote:
> Thanks for the explanation about verity.
> I got your point. Thanks~

Could the fix look like this? The idea is to initialize
dic->verity_pages in f2fs_read_multi_pages(), where the read bios are
actually built, and to pass a for_verity flag down to
f2fs_grab_read_bio(), so that STEP_VERITY is only enabled on bios whose
completion path will actually decrement verity_pages.

---
 fs/f2fs/compress.c |  2 --
 fs/f2fs/data.c     | 19 +++++++++++++------
 2 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 89f73a7c8667..c5fee4d7ea72 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1491,8 +1491,6 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
 	dic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
 	dic->inode = cc->inode;
 	atomic_set(&dic->pending_pages, cc->nr_cpages);
-	if (fsverity_active(cc->inode))
-		atomic_set(&dic->verity_pages, cc->nr_cpages);
 	dic->cluster_idx = cc->cluster_idx;
 	dic->cluster_size = cc->cluster_size;
 	dic->log_cluster_size = cc->log_cluster_size;
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index e3168f32f943..657fb562d7d4 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1035,7 +1035,8 @@ static inline bool f2fs_need_verity(const struct inode *inode, pgoff_t idx)
 
 static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 				      unsigned nr_pages, unsigned op_flag,
-				      pgoff_t first_idx, bool for_write)
+				      pgoff_t first_idx, bool for_write,
+				      bool for_verity)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct bio *bio;
@@ -1057,7 +1058,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 		post_read_steps |= 1 << STEP_DECRYPT;
 	if (f2fs_compressed_file(inode))
 		post_read_steps |= 1 << STEP_DECOMPRESS_NOWQ;
-	if (f2fs_need_verity(inode, first_idx))
+	if (for_verity && f2fs_need_verity(inode, first_idx))
 		post_read_steps |= 1 << STEP_VERITY;
 
 	if (post_read_steps) {
@@ -1087,7 +1088,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
 	struct bio *bio;
 
 	bio = f2fs_grab_read_bio(inode, blkaddr, 1, op_flags,
-					page->index, for_write);
+					page->index, for_write, true);
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);
 
@@ -2141,7 +2142,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 	if (bio == NULL) {
 		bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
 				is_readahead ? REQ_RAHEAD : 0, page->index,
-				false);
+				false, true);
 		if (IS_ERR(bio)) {
 			ret = PTR_ERR(bio);
 			bio = NULL;
@@ -2188,6 +2189,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 	const unsigned blkbits = inode->i_blkbits;
 	const unsigned blocksize = 1 << blkbits;
 	struct decompress_io_ctx *dic = NULL;
+	bool for_verity = false;
 	int i;
 	int ret = 0;
 
@@ -2253,6 +2255,11 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 		goto out_put_dnode;
 	}
 
+	if (fsverity_active(cc->inode)) {
+		atomic_set(&dic->verity_pages, cc->nr_cpages);
+		for_verity = true;
+	}
+
 	for (i = 0; i < dic->nr_cpages; i++) {
 		struct page *page = dic->cpages[i];
 		block_t blkaddr;
@@ -2272,7 +2279,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 		if (!bio) {
 			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
 					is_readahead ? REQ_RAHEAD : 0,
-					page->index, for_write);
+					page->index, for_write, for_verity);
 			if (IS_ERR(bio)) {
 				unsigned int remained = dic->nr_cpages - i;
 				bool release = false;
@@ -2280,7 +2287,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 				ret = PTR_ERR(bio);
 				dic->failed = true;
 
-				if (fsverity_active(inode)) {
+				if (for_verity) {
 					if (!atomic_sub_return(remained,
 						&dic->verity_pages))
 						release = true;
-- 
2.29.2.576.ga3fc446d84-goog
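
For reference, here is a stand-alone user-space model of the counter
accounting (illustrative names only, not f2fs code) showing why
verity_pages must only be armed when the submitted bios really carry
the verity step:

/* verity_model.c: build with gcc -std=c11 verity_model.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct dic_model {
	atomic_int pending_pages;	/* decompression references */
	atomic_int verity_pages;	/* verity work references */
};

/* Arm the counters at submission time; verity_pages is armed only
 * when the bios will really carry the verity step (for_verity). */
static void submit_cluster(struct dic_model *dic, int nr_cpages,
			   bool for_verity)
{
	atomic_init(&dic->pending_pages, nr_cpages);
	atomic_init(&dic->verity_pages, for_verity ? nr_cpages : 0);
}

/* Completion of one compressed page; returns true when the last
 * reference is dropped and the cluster can be released. */
static bool page_done(struct dic_model *dic, bool did_verity)
{
	if (did_verity && atomic_fetch_sub(&dic->verity_pages, 1) == 1)
		printf("verity side released\n");
	return atomic_fetch_sub(&dic->pending_pages, 1) == 1;
}

int main(void)
{
	struct dic_model dic;
	int i;

	/* If verity_pages were armed whenever fsverity_active() is
	 * true (as in f2fs_alloc_dic() before this patch) while the
	 * bios were built without the verity step, the counter would
	 * never reach zero and the pages would never be released.
	 * Here one condition decides both the step and the counter. */
	submit_cluster(&dic, 4, true);
	for (i = 0; i < 4; i++)
		if (page_done(&dic, true))
			printf("cluster released after page %d\n", i);
	return 0;
}

In the patch above, the same role is played by passing for_verity from
f2fs_read_multi_pages() down to f2fs_grab_read_bio(), so the counter
initialization and the bio's post-read steps can no longer disagree.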