* [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup
@ 2018-09-10 14:56 Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 1/4] gfs2: add timing info to map_journal_extents Abhi Das
` (6 more replies)
0 siblings, 7 replies; 10+ messages in thread
From: Abhi Das @ 2018-09-10 14:56 UTC (permalink / raw)
To: cluster-devel.redhat.com
This is a revised version of the patchset I'd posted a few days
ago. It contains fixes and some cleanup suggested by Andreas
and Bob.
It is slightly different in parts from the rhel7 patchset I'd posted
originally, owing to some bits already being present and the hash/crc
computation code being different due to the updated log header structure.
Cheers!
--Abhi
Abhi Das (4):
gfs2: add timing info to map_journal_extents
gfs2: changes to gfs2_log_XXX_bio
gfs2: add a helper function to get_log_header that can be used
elsewhere
gfs2: read journal in large chunks to locate the head
fs/gfs2/bmap.c | 8 ++-
fs/gfs2/incore.h | 8 ++-
fs/gfs2/log.c | 4 +-
fs/gfs2/lops.c | 180 +++++++++++++++++++++++++++++++++++++--------------
fs/gfs2/lops.h | 3 +-
fs/gfs2/ops_fstype.c | 1 +
fs/gfs2/recovery.c | 168 ++++++++++++-----------------------------------
fs/gfs2/recovery.h | 2 +
8 files changed, 194 insertions(+), 180 deletions(-)
--
2.4.11
* [Cluster-devel] [GFS2 v2 PATCH 1/4] gfs2: add timing info to map_journal_extents
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
@ 2018-09-10 14:56 ` Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio Abhi Das
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Abhi Das @ 2018-09-10 14:56 UTC (permalink / raw)
To: cluster-devel.redhat.com
Print how many milliseconds gfs2_map_journal_extents() takes to map a journal's extents.
Signed-off-by: Abhi Das <adas@redhat.com>
---
fs/gfs2/bmap.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index 03128ed..dddb5a4 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -14,6 +14,7 @@
#include <linux/gfs2_ondisk.h>
#include <linux/crc32.h>
#include <linux/iomap.h>
+#include <linux/ktime.h>
#include "gfs2.h"
#include "incore.h"
@@ -2248,7 +2249,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd)
unsigned int shift = sdp->sd_sb.sb_bsize_shift;
u64 size;
int rc;
+ ktime_t start, end;
+ start = ktime_get();
lblock_stop = i_size_read(jd->jd_inode) >> shift;
size = (lblock_stop - lblock) << shift;
jd->nr_extents = 0;
@@ -2268,8 +2271,9 @@ int gfs2_map_journal_extents(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd)
lblock += (bh.b_size >> ip->i_inode.i_blkbits);
} while(size > 0);
- fs_info(sdp, "journal %d mapped with %u extents\n", jd->jd_jid,
- jd->nr_extents);
+ end = ktime_get();
+ fs_info(sdp, "journal %d mapped with %u extents in %lldms\n", jd->jd_jid,
+ jd->nr_extents, ktime_ms_delta(end, start));
return 0;
fail:
--
2.4.11
* [Cluster-devel] [GFS2 v2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 1/4] gfs2: add timing info to map_journal_extents Abhi Das
@ 2018-09-10 14:56 ` Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere Abhi Das
` (4 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Abhi Das @ 2018-09-10 14:56 UTC (permalink / raw)
To: cluster-devel.redhat.com
Change the gfs2_log_XXX_bio family of functions so that they can also be
used for read operations.
This patch also contains some cleanup and coalescing of the
above functions, as suggested by Andreas.
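The core of the refactor is replacing direct access to the superblock's cached bio (sdp->sd_log_bio) with a `struct bio **` parameter, so the same get/flush helpers can later drive a per-journal read bio as well. The caller-owned-cache pattern can be sketched with a toy stand-in for struct bio (all names here are hypothetical, not the kernel API):

```c
#include <stdlib.h>

/* Toy stand-in for a struct bio. */
struct toy_bio { int filled; };

static int submit_count;   /* counts "submissions" for illustration */

/* Like gfs2_log_flush_bio() after the patch: operate on the caller's
 * cached pointer, submit whatever is pending, and clear the cache. */
static void flush_bio(struct toy_bio **biop)
{
    struct toy_bio *bio = *biop;
    if (bio) {
        submit_count++;
        free(bio);
        *biop = NULL;   /* caller's cache is now empty */
    }
}

/* Like gfs2_log_get_bio(): return the cached bio if it can still be
 * appended to, otherwise flush it and allocate a fresh one. */
static struct toy_bio *get_bio(struct toy_bio **biop, int reuse_ok)
{
    if (*biop) {
        if (reuse_ok)
            return *biop;
        flush_bio(biop);
    }
    *biop = calloc(1, sizeof(**biop));
    return *biop;
}
```

Because the cache lives at the caller's pointer rather than in one fixed field, write paths and read paths can each keep their own pending bio while sharing the helpers.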
Signed-off-by: Abhi Das <adas@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
fs/gfs2/log.c | 4 +--
fs/gfs2/lops.c | 86 ++++++++++++++++++++++++++--------------------------------
fs/gfs2/lops.h | 2 +-
3 files changed, 41 insertions(+), 51 deletions(-)
diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
index ee20ea42..b80fb30 100644
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -731,7 +731,7 @@ void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
lh->lh_crc = cpu_to_be32(crc);
gfs2_log_write(sdp, page, sb->s_blocksize, 0, addr);
- gfs2_log_flush_bio(sdp, REQ_OP_WRITE, op_flags);
+ gfs2_log_flush_bio(&sdp->sd_log_bio, REQ_OP_WRITE, op_flags);
log_flush_wait(sdp);
}
@@ -808,7 +808,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
gfs2_ordered_write(sdp);
lops_before_commit(sdp, tr);
- gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0);
+ gfs2_log_flush_bio(&sdp->sd_log_bio, REQ_OP_WRITE, 0);
if (sdp->sd_log_head != sdp->sd_log_flush_head) {
log_flush_wait(sdp);
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index f2567f9..f5f31a6 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -229,7 +229,7 @@ static void gfs2_end_log_write(struct bio *bio)
/**
* gfs2_log_flush_bio - Submit any pending log bio
- * @sdp: The superblock
+ * @biop: Address of the bio pointer
* @op: REQ_OP
* @op_flags: req_flag_bits
*
@@ -237,74 +237,61 @@ static void gfs2_end_log_write(struct bio *bio)
* there is no pending bio, then this is a no-op.
*/
-void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags)
+void gfs2_log_flush_bio(struct bio **biop, int op, int op_flags)
{
- if (sdp->sd_log_bio) {
+ struct bio *bio = *biop;
+ if (bio) {
+ struct gfs2_sbd *sdp = bio->bi_private;
atomic_inc(&sdp->sd_log_in_flight);
- bio_set_op_attrs(sdp->sd_log_bio, op, op_flags);
- submit_bio(sdp->sd_log_bio);
- sdp->sd_log_bio = NULL;
+ bio_set_op_attrs(bio, op, op_flags);
+ submit_bio(bio);
+ *biop = NULL;
}
}
/**
- * gfs2_log_alloc_bio - Allocate a new bio for log writing
- * @sdp: The superblock
- * @blkno: The next device block number we want to write to
- *
- * This should never be called when there is a cached bio in the
- * super block. When it returns, there will be a cached bio in the
- * super block which will have as many bio_vecs as the device is
- * happy to handle.
- *
- * Returns: Newly allocated bio
- */
-
-static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno)
-{
- struct super_block *sb = sdp->sd_vfs;
- struct bio *bio;
-
- BUG_ON(sdp->sd_log_bio);
-
- bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
- bio->bi_iter.bi_sector = blkno * (sb->s_blocksize >> 9);
- bio_set_dev(bio, sb->s_bdev);
- bio->bi_end_io = gfs2_end_log_write;
- bio->bi_private = sdp;
-
- sdp->sd_log_bio = bio;
-
- return bio;
-}
-
-/**
* gfs2_log_get_bio - Get cached log bio, or allocate a new one
- * @sdp: The superblock
+ * @sdp: The super block
* @blkno: The device block number we want to write to
+ * @bio: The bio to get or allocate
+ * @op: REQ_OP
+ * @end_io: The bi_end_io callback
+ * @private: The bi_private value
+ * @flush: Always flush the current bio and allocate a new one?
*
* If there is a cached bio, then if the next block number is sequential
* with the previous one, return it, otherwise flush the bio to the
- * device. If there is not a cached bio, or we just flushed it, then
+ * device. If there is no cached bio, or we just flushed it, then
* allocate a new one.
*
* Returns: The bio to use for log writes
*/
-static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno)
+static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno,
+ struct bio **biop, int op,
+ bio_end_io_t *end_io, void *private,
+ bool flush)
{
- struct bio *bio = sdp->sd_log_bio;
- u64 nblk;
+ struct super_block *sb = sdp->sd_vfs;
+ struct bio *bio = *biop;
if (bio) {
+ u64 nblk;
+
nblk = bio_end_sector(bio);
nblk >>= sdp->sd_fsb2bb_shift;
- if (blkno == nblk)
+ if (blkno == nblk && !flush)
return bio;
- gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0);
+ gfs2_log_flush_bio(biop, op, 0);
}
- return gfs2_log_alloc_bio(sdp, blkno);
+ bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
+ *biop = bio;
+ bio->bi_iter.bi_sector = blkno * (sb->s_blocksize >> 9);
+ bio_set_dev(bio, sb->s_bdev);
+ bio->bi_end_io = end_io;
+ bio->bi_private = private;
+ return bio;
}
/**
@@ -326,11 +313,14 @@ void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page,
struct bio *bio;
int ret;
- bio = gfs2_log_get_bio(sdp, blkno);
+ bio = gfs2_log_get_bio(sdp, blkno, &sdp->sd_log_bio,
+ REQ_OP_WRITE, gfs2_end_log_write,
+ sdp, false);
ret = bio_add_page(bio, page, size, offset);
if (ret == 0) {
- gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0);
- bio = gfs2_log_alloc_bio(sdp, blkno);
+ bio = gfs2_log_get_bio(sdp, blkno, &sdp->sd_log_bio,
+ REQ_OP_WRITE, gfs2_end_log_write,
+ sdp, true);
ret = bio_add_page(bio, page, size, offset);
WARN_ON(ret == 0);
}
diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h
index e494939..d709d99 100644
--- a/fs/gfs2/lops.h
+++ b/fs/gfs2/lops.h
@@ -30,7 +30,7 @@ extern u64 gfs2_log_bmap(struct gfs2_sbd *sdp);
extern void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page,
unsigned size, unsigned offset, u64 blkno);
extern void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page);
-extern void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags);
+extern void gfs2_log_flush_bio(struct bio **biop, int op, int op_flags);
extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh);
static inline unsigned int buf_limit(struct gfs2_sbd *sdp)
--
2.4.11
* [Cluster-devel] [GFS2 v2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 1/4] gfs2: add timing info to map_journal_extents Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio Abhi Das
@ 2018-09-10 14:56 ` Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das
` (3 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Abhi Das @ 2018-09-10 14:56 UTC (permalink / raw)
To: cluster-devel.redhat.com
Move and re-order the error checks and hash/crc computations into a new
function, __get_log_header(), so that it can be used in scenarios where
buffer_heads are not being used for the log header.
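As the diff below shows, __get_log_header() validates the magic/type fields and block number first, then the crc32 over the v1 header, then (when nonzero) the crc32c over the rest of the block, returning 1 for a bad header and 0 for a good one. The shape of that check order can be sketched with a toy header and a toy checksum (the `sketch_` names and the checksum are illustrative only; the real code uses crc32/crc32c over big-endian on-disk fields):

```c
#include <stddef.h>
#include <stdint.h>

#define SKETCH_MAGIC 0x01161970u   /* GFS2_MAGIC, for flavour */

struct sketch_lh {
    uint32_t magic;
    uint32_t blkno;
    uint64_t sequence;
    uint32_t hash;   /* checksum over the fields above */
};

/* Toy checksum standing in for crc32(); NOT the kernel algorithm. */
static uint32_t sketch_sum(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t s = ~0u;
    while (len--)
        s = (s << 5) + s + *p++;
    return s;
}

/* Like __get_log_header(): 0 = valid header, 1 = not a log header.
 * blkno == 0 skips the block-number check, as in the patch. */
static int sketch_get_log_header(const struct sketch_lh *lh, uint32_t blkno,
                                 uint64_t *sequence)
{
    if (lh->magic != SKETCH_MAGIC || (blkno && lh->blkno != blkno))
        return 1;
    if (lh->hash != sketch_sum(lh, offsetof(struct sketch_lh, hash)))
        return 1;
    *sequence = lh->sequence;   /* only copy fields out of a valid header */
    return 0;
}
```

Checking the cheap field comparisons before computing any checksum matches the re-ordering the commit message describes.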
Signed-off-by: Abhi Das <adas@redhat.com>
---
fs/gfs2/recovery.c | 53 ++++++++++++++++++++++++++++++++---------------------
fs/gfs2/recovery.h | 2 ++
2 files changed, 34 insertions(+), 21 deletions(-)
diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
index 0f501f9..1b95294 100644
--- a/fs/gfs2/recovery.c
+++ b/fs/gfs2/recovery.c
@@ -120,6 +120,35 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd)
}
}
+int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
+ unsigned int blkno, struct gfs2_log_header_host *head)
+{
+ u32 hash, crc;
+
+ if (lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) ||
+ lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) ||
+ (blkno && be32_to_cpu(lh->lh_blkno) != blkno))
+ return 1;
+
+ hash = crc32(~0, lh, LH_V1_SIZE - 4);
+ hash = ~crc32_le_shift(hash, 4); /* assume lh_hash is zero */
+
+ if (be32_to_cpu(lh->lh_hash) != hash)
+ return 1;
+
+ crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4,
+ sdp->sd_sb.sb_bsize - LH_V1_SIZE - 4);
+
+ if ((lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc))
+ return 1;
+
+ head->lh_sequence = be64_to_cpu(lh->lh_sequence);
+ head->lh_flags = be32_to_cpu(lh->lh_flags);
+ head->lh_tail = be32_to_cpu(lh->lh_tail);
+ head->lh_blkno = be32_to_cpu(lh->lh_blkno);
+
+ return 0;
+}
/**
* get_log_header - read the log header for a given segment
* @jd: the journal
@@ -137,36 +166,18 @@ void gfs2_revoke_clean(struct gfs2_jdesc *jd)
static int get_log_header(struct gfs2_jdesc *jd, unsigned int blk,
struct gfs2_log_header_host *head)
{
- struct gfs2_log_header *lh;
+ struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
struct buffer_head *bh;
- u32 hash, crc;
int error;
error = gfs2_replay_read_block(jd, blk, &bh);
if (error)
return error;
- lh = (void *)bh->b_data;
-
- hash = crc32(~0, lh, LH_V1_SIZE - 4);
- hash = ~crc32_le_shift(hash, 4); /* assume lh_hash is zero */
-
- crc = crc32c(~0, (void *)lh + LH_V1_SIZE + 4,
- bh->b_size - LH_V1_SIZE - 4);
-
- error = lh->lh_header.mh_magic != cpu_to_be32(GFS2_MAGIC) ||
- lh->lh_header.mh_type != cpu_to_be32(GFS2_METATYPE_LH) ||
- be32_to_cpu(lh->lh_blkno) != blk ||
- be32_to_cpu(lh->lh_hash) != hash ||
- (lh->lh_crc != 0 && be32_to_cpu(lh->lh_crc) != crc);
+ error = __get_log_header(sdp, (const struct gfs2_log_header *)bh->b_data,
+ blk, head);
brelse(bh);
- if (!error) {
- head->lh_sequence = be64_to_cpu(lh->lh_sequence);
- head->lh_flags = be32_to_cpu(lh->lh_flags);
- head->lh_tail = be32_to_cpu(lh->lh_tail);
- head->lh_blkno = be32_to_cpu(lh->lh_blkno);
- }
return error;
}
diff --git a/fs/gfs2/recovery.h b/fs/gfs2/recovery.h
index 11fdfab..943a67c 100644
--- a/fs/gfs2/recovery.h
+++ b/fs/gfs2/recovery.h
@@ -31,6 +31,8 @@ extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
struct gfs2_log_header_host *head);
extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd, bool wait);
extern void gfs2_recover_func(struct work_struct *work);
+extern int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
+ unsigned int blkno, struct gfs2_log_header_host *head);
#endif /* __RECOVERY_DOT_H__ */
--
2.4.11
* [Cluster-devel] [GFS2 v2 PATCH 4/4] gfs2: read journal in large chunks to locate the head
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
` (2 preceding siblings ...)
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere Abhi Das
@ 2018-09-10 14:56 ` Abhi Das
2018-09-10 15:35 ` [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Andreas Gruenbacher
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Abhi Das @ 2018-09-10 14:56 UTC (permalink / raw)
To: cluster-devel.redhat.com
Use bio(s) to read in the journal sequentially in large chunks and
locate the head of the journal.
In most cases this is faster than the existing bisect method, which
operates one block at a time.
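The journal is a circular log whose header sequence numbers increase monotonically up to the head and then wrap, so a single forward scan can stop at the first header whose sequence is not greater than the running maximum. A minimal sketch of that scan (assuming, unlike the real code, that every block holds a valid sequence number):

```c
#include <stddef.h>
#include <stdint.h>

/* Find the head of a circular log: scan once from block 0, tracking the
 * highest sequence seen. The first block whose sequence drops marks the
 * wrap point, so the previous maximum is the head. */
static size_t sketch_find_jhead(const uint64_t *seq, size_t nblocks)
{
    uint64_t best_seq = 0;
    size_t head = 0;

    for (size_t i = 0; i < nblocks; i++) {
        if (seq[i] > best_seq) {
            best_seq = seq[i];
            head = i;
        } else {
            break;   /* sequence dropped: head found */
        }
    }
    return head;
}
```

The bisect method this replaces needs fewer block reads, but each read is a synchronous single-block I/O; reading the journal sequentially in large bios trades more bytes read for far fewer, larger, pipelined requests.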
Signed-off-by: Abhi Das <adas@redhat.com>
---
fs/gfs2/incore.h | 8 +++-
fs/gfs2/lops.c | 96 +++++++++++++++++++++++++++++++++++++++++-
fs/gfs2/lops.h | 1 +
fs/gfs2/ops_fstype.c | 1 +
fs/gfs2/recovery.c | 115 +++++----------------------------------------------
5 files changed, 114 insertions(+), 107 deletions(-)
diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
index b96d39c..b24c105 100644
--- a/fs/gfs2/incore.h
+++ b/fs/gfs2/incore.h
@@ -529,6 +529,11 @@ struct gfs2_journal_extent {
u64 blocks;
};
+enum {
+ JDF_RECOVERY = 1,
+ JDF_JHEAD = 2,
+};
+
struct gfs2_jdesc {
struct list_head jd_list;
struct list_head extent_list;
@@ -536,12 +541,13 @@ struct gfs2_jdesc {
struct work_struct jd_work;
struct inode *jd_inode;
unsigned long jd_flags;
-#define JDF_RECOVERY 1
unsigned int jd_jid;
unsigned int jd_blocks;
int jd_recover_error;
/* Replay stuff */
+ struct gfs2_log_header_host jd_jhead;
+ struct bio *jd_rd_bio; /* bio used for reading this journal */
unsigned int jd_found_blocks;
unsigned int jd_found_revokes;
unsigned int jd_replayed_blocks;
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index f5f31a6..24d5dba 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -18,6 +18,7 @@
#include <linux/fs.h>
#include <linux/list_sort.h>
+#include "bmap.h"
#include "dir.h"
#include "gfs2.h"
#include "incore.h"
@@ -227,6 +228,50 @@ static void gfs2_end_log_write(struct bio *bio)
wake_up(&sdp->sd_log_flush_wait);
}
+static void gfs2_end_log_read(struct bio *bio)
+{
+ struct gfs2_jdesc *jd = bio->bi_private;
+ struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
+ struct page *page;
+ struct bio_vec *bvec;
+ int i, last;
+
+ if (bio->bi_status) {
+ fs_err(sdp, "Error %d reading from journal, jid=%u\n",
+ bio->bi_status, jd->jd_jid);
+ }
+
+ bio_for_each_segment_all(bvec, bio, i) {
+ struct gfs2_log_header_host uninitialized_var(lh);
+ void *ptr;
+
+ page = bvec->bv_page;
+ ptr = page_address(page);
+ last = page_private(page);
+
+ if (!test_bit(JDF_JHEAD, &jd->jd_flags)) {
+ mempool_free(page, gfs2_page_pool);
+ continue;
+ }
+
+ if (!__get_log_header(sdp, ptr, 0, &lh)) {
+ if (lh.lh_sequence > jd->jd_jhead.lh_sequence)
+ jd->jd_jhead = lh;
+ else
+ goto found;
+ }
+
+ if (last) {
+ found:
+ clear_bit(JDF_JHEAD, &jd->jd_flags);
+ wake_up_bit(&jd->jd_flags, JDF_JHEAD);
+ }
+ mempool_free(page, gfs2_page_pool);
+ }
+
+ bio_put(bio);
+}
+
/**
* gfs2_log_flush_bio - Submit any pending log bio
* @biop: Address of the bio pointer
@@ -241,8 +286,10 @@ void gfs2_log_flush_bio(struct bio **biop, int op, int op_flags)
{
struct bio *bio = *biop;
if (bio) {
- struct gfs2_sbd *sdp = bio->bi_private;
- atomic_inc(&sdp->sd_log_in_flight);
+ if (op != REQ_OP_READ) {
+ struct gfs2_sbd *sdp = bio->bi_private;
+ atomic_inc(&sdp->sd_log_in_flight);
+ }
bio_set_op_attrs(bio, op, op_flags);
submit_bio(bio);
*biop = NULL;
@@ -360,6 +407,51 @@ void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page)
gfs2_log_bmap(sdp));
}
+static void gfs2_log_read_extent(struct gfs2_jdesc *jd, u64 dblock,
+ unsigned int blocks, int last)
+{
+ struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
+ struct super_block *sb = sdp->sd_vfs;
+ struct page *page;
+ int i, ret;
+ struct bio *bio;
+
+ for (i = 0; i < blocks; i++) {
+ page = mempool_alloc(gfs2_page_pool, GFP_NOIO);
+ /* flag the last page of the journal we plan to read in */
+ page_private(page) = (last && i == (blocks - 1));
+
+ bio = gfs2_log_get_bio(sdp, dblock + i, &jd->jd_rd_bio,
+ REQ_OP_READ, gfs2_end_log_read,
+ jd, false);
+ ret = bio_add_page(bio, page, sb->s_blocksize, 0);
+ if (ret == 0) {
+ bio = gfs2_log_get_bio(sdp, dblock + i, &jd->jd_rd_bio,
+ REQ_OP_READ, gfs2_end_log_read,
+ jd, true);
+ ret = bio_add_page(bio, page, sb->s_blocksize, 0);
+ WARN_ON(ret == 0);
+ }
+ bio->bi_private = jd;
+ }
+}
+
+void gfs2_log_read(struct gfs2_jdesc *jd)
+{
+ struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
+ int last = 0;
+ struct gfs2_journal_extent *je;
+
+ if (list_empty(&jd->extent_list))
+ gfs2_map_journal_extents(sdp, jd);
+
+ list_for_each_entry(je, &jd->extent_list, list) {
+ last = list_is_last(&je->list, &jd->extent_list);
+ gfs2_log_read_extent(jd, je->dblock, je->blocks, last);
+ gfs2_log_flush_bio(&jd->jd_rd_bio, REQ_OP_READ, 0);
+ }
+}
+
static struct page *gfs2_get_log_desc(struct gfs2_sbd *sdp, u32 ld_type,
u32 ld_length, u32 ld_data1)
{
diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h
index d709d99..23392c5d 100644
--- a/fs/gfs2/lops.h
+++ b/fs/gfs2/lops.h
@@ -32,6 +32,7 @@ extern void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page,
extern void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page);
extern void gfs2_log_flush_bio(struct bio **biop, int op, int op_flags);
extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh);
+extern void gfs2_log_read(struct gfs2_jdesc *jd);
static inline unsigned int buf_limit(struct gfs2_sbd *sdp)
{
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index c2469833b..dcc488b4 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -578,6 +578,7 @@ static int gfs2_jindex_hold(struct gfs2_sbd *sdp, struct gfs2_holder *ji_gh)
kfree(jd);
break;
}
+ jd->jd_rd_bio = NULL;
spin_lock(&sdp->sd_jindex_spin);
jd->jd_jid = sdp->sd_journals++;
diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
index 1b95294..0f80f25 100644
--- a/fs/gfs2/recovery.c
+++ b/fs/gfs2/recovery.c
@@ -182,85 +182,11 @@ static int get_log_header(struct gfs2_jdesc *jd, unsigned int blk,
}
/**
- * find_good_lh - find a good log header
- * @jd: the journal
- * @blk: the segment to start searching from
- * @lh: the log header to fill in
- * @forward: if true search forward in the log, else search backward
- *
- * Call get_log_header() to get a log header for a segment, but if the
- * segment is bad, either scan forward or backward until we find a good one.
- *
- * Returns: errno
- */
-
-static int find_good_lh(struct gfs2_jdesc *jd, unsigned int *blk,
- struct gfs2_log_header_host *head)
-{
- unsigned int orig_blk = *blk;
- int error;
-
- for (;;) {
- error = get_log_header(jd, *blk, head);
- if (error <= 0)
- return error;
-
- if (++*blk == jd->jd_blocks)
- *blk = 0;
-
- if (*blk == orig_blk) {
- gfs2_consist_inode(GFS2_I(jd->jd_inode));
- return -EIO;
- }
- }
-}
-
-/**
- * jhead_scan - make sure we've found the head of the log
- * @jd: the journal
- * @head: this is filled in with the log descriptor of the head
- *
- * At this point, seg and lh should be either the head of the log or just
- * before. Scan forward until we find the head.
- *
- * Returns: errno
- */
-
-static int jhead_scan(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
-{
- unsigned int blk = head->lh_blkno;
- struct gfs2_log_header_host lh;
- int error;
-
- for (;;) {
- if (++blk == jd->jd_blocks)
- blk = 0;
-
- error = get_log_header(jd, blk, &lh);
- if (error < 0)
- return error;
- if (error == 1)
- continue;
-
- if (lh.lh_sequence == head->lh_sequence) {
- gfs2_consist_inode(GFS2_I(jd->jd_inode));
- return -EIO;
- }
- if (lh.lh_sequence < head->lh_sequence)
- break;
-
- *head = lh;
- }
-
- return 0;
-}
-
-/**
* gfs2_find_jhead - find the head of a log
* @jd: the journal
* @head: the log descriptor for the head of the log is returned here
*
- * Do a binary search of a journal and find the valid log entry with the
+ * Do a search of a journal and find the valid log entry with the
* highest sequence number. (i.e. the log head)
*
* Returns: errno
@@ -268,38 +194,19 @@ static int jhead_scan(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
{
- struct gfs2_log_header_host lh_1, lh_m;
- u32 blk_1, blk_2, blk_m;
- int error;
-
- blk_1 = 0;
- blk_2 = jd->jd_blocks - 1;
-
- for (;;) {
- blk_m = (blk_1 + blk_2) / 2;
-
- error = find_good_lh(jd, &blk_1, &lh_1);
- if (error)
- return error;
-
- error = find_good_lh(jd, &blk_m, &lh_m);
- if (error)
- return error;
-
- if (blk_1 == blk_m || blk_m == blk_2)
- break;
+ int error = 0;
- if (lh_1.lh_sequence <= lh_m.lh_sequence)
- blk_1 = blk_m;
- else
- blk_2 = blk_m;
- }
+ memset(&jd->jd_jhead, 0, sizeof(struct gfs2_log_header_host));
+ set_bit(JDF_JHEAD, &jd->jd_flags);
+ gfs2_log_read(jd);
- error = jhead_scan(jd, &lh_1);
- if (error)
- return error;
+ if (test_bit(JDF_JHEAD, &jd->jd_flags))
+ wait_on_bit(&jd->jd_flags, JDF_JHEAD, TASK_INTERRUPTIBLE);
- *head = lh_1;
+ if (jd->jd_jhead.lh_sequence == 0)
+ error = -EIO;
+ else
+ *head = jd->jd_jhead;
return error;
}
--
2.4.11
* [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
` (3 preceding siblings ...)
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das
@ 2018-09-10 15:35 ` Andreas Gruenbacher
2018-09-10 15:46 ` Bob Peterson
2018-09-10 18:46 ` Abhijith Das
6 siblings, 0 replies; 10+ messages in thread
From: Andreas Gruenbacher @ 2018-09-10 15:35 UTC (permalink / raw)
To: cluster-devel.redhat.com
On 10 September 2018 at 16:56, Abhi Das <adas@redhat.com> wrote:
> This is a revised version of the patchset I'd posted a few days
> ago. It contains fixes and some cleanup suggested by Andreas
> and Bob.
>
> It is slightly different in parts from the rhel7 patchset I'd posted
> originally, owing to some bits already being present and the hash/crc
> computation code being different due to the updated log header structure.
The patches are looking good to me. I've put them on a separate branch
on kernel.org for the next merge window.
Thanks,
Andreas
* [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
` (4 preceding siblings ...)
2018-09-10 15:35 ` [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Andreas Gruenbacher
@ 2018-09-10 15:46 ` Bob Peterson
2018-09-10 16:28 ` Abhijith Das
2018-09-10 18:46 ` Abhijith Das
6 siblings, 1 reply; 10+ messages in thread
From: Bob Peterson @ 2018-09-10 15:46 UTC (permalink / raw)
To: cluster-devel.redhat.com
----- Original Message -----
> This is a revised version of the patchset I'd posted a few days
> ago. It contains fixes and some cleanup suggested by Andreas
> and Bob.
>
> It is slightly different in parts from the rhel7 patchset I'd posted
> originally, owing to some bits already being present and the hash/crc
> computation code being different due to the updated log header structure.
>
> Cheers!
> --Abhi
>
> Abhi Das (4):
> gfs2: add timing info to map_journal_extents
> gfs2: changes to gfs2_log_XXX_bio
> gfs2: add a helper function to get_log_header that can be used
> elsewhere
> gfs2: read journal in large chunks to locate the head
>
> fs/gfs2/bmap.c | 8 ++-
> fs/gfs2/incore.h | 8 ++-
> fs/gfs2/log.c | 4 +-
> fs/gfs2/lops.c | 180
> +++++++++++++++++++++++++++++++++++++--------------
> fs/gfs2/lops.h | 3 +-
> fs/gfs2/ops_fstype.c | 1 +
> fs/gfs2/recovery.c | 168 ++++++++++++-----------------------------------
> fs/gfs2/recovery.h | 2 +
> 8 files changed, 194 insertions(+), 180 deletions(-)
>
> --
> 2.4.11
>
>
Hi,
The patch set looks good to me. I assume you've tested it on file systems
in which block size < page size, right?
Reviewed-by: Bob Peterson <rpeterso@redhat.com>
Regards,
Bob Peterson
* [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup
2018-09-10 15:46 ` Bob Peterson
@ 2018-09-10 16:28 ` Abhijith Das
0 siblings, 0 replies; 10+ messages in thread
From: Abhijith Das @ 2018-09-10 16:28 UTC (permalink / raw)
To: cluster-devel.redhat.com
On Mon, Sep 10, 2018 at 10:46 AM Bob Peterson <rpeterso@redhat.com> wrote:
> ----- Original Message -----
> > This is a revised version of the patchset I'd posted a few days
> > ago. It contains fixes and some cleanup suggested by Andreas
> > and Bob.
> >
> > It is slightly different in parts from the rhel7 patchset I'd posted
> > originally, owing to some bits already being present and the hash/crc
> > computation code being different due to the updated log header structure.
> >
> > Cheers!
> > --Abhi
> >
> > Abhi Das (4):
> > gfs2: add timing info to map_journal_extents
> > gfs2: changes to gfs2_log_XXX_bio
> > gfs2: add a helper function to get_log_header that can be used
> > elsewhere
> > gfs2: read journal in large chunks to locate the head
> >
> > fs/gfs2/bmap.c | 8 ++-
> > fs/gfs2/incore.h | 8 ++-
> > fs/gfs2/log.c | 4 +-
> > fs/gfs2/lops.c | 180
> > +++++++++++++++++++++++++++++++++++++--------------
> > fs/gfs2/lops.h | 3 +-
> > fs/gfs2/ops_fstype.c | 1 +
> > fs/gfs2/recovery.c | 168
> ++++++++++++-----------------------------------
> > fs/gfs2/recovery.h | 2 +
> > 8 files changed, 194 insertions(+), 180 deletions(-)
> >
> > --
> > 2.4.11
> >
> >
> Hi,
>
> The patch set looks good to me. I assume you've tested it on file systems
> in which block size < page size, right?
>
Yes. But this upstream version has not been tested as much as the rhel7
one I'd posted a few weeks ago. The rhel7 patchset was tested against 2 and
4 node clusters with revolver and passed.
With changes and cleanups, this upstream version has diverged from the
rhel7 one, so I'm going to backport again to rhel7 and have QE re-test it.
I'll also do more single-node testing on upstream and report. I don't have
an upstream cluster... Any volunteers for upstream cluster testing?
Cheers!
--Abhi
* [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
` (5 preceding siblings ...)
2018-09-10 15:46 ` Bob Peterson
@ 2018-09-10 18:46 ` Abhijith Das
6 siblings, 0 replies; 10+ messages in thread
From: Abhijith Das @ 2018-09-10 18:46 UTC (permalink / raw)
To: cluster-devel.redhat.com
NACK for now. Andreas reported that xfstests fails with consistency errors
with this patchset. I'm able to reproduce it too, and I'm trying to figure
out what's going on. Hopefully I'll have revised patches soon.
Cheers!
--Abhi
On Mon, Sep 10, 2018 at 9:56 AM Abhi Das <adas@redhat.com> wrote:
> This is a revised version of the patchset I'd posted a few days
> ago. It contains fixes and some cleanup suggested by Andreas
> and Bob.
>
> It is slightly different in parts from the rhel7 patchset I'd posted
> originally, owing to some bits already being present and the hash/crc
> computation code being different due to the updated log header structure.
>
> Cheers!
> --Abhi
>
> Abhi Das (4):
> gfs2: add timing info to map_journal_extents
> gfs2: changes to gfs2_log_XXX_bio
> gfs2: add a helper function to get_log_header that can be used
> elsewhere
> gfs2: read journal in large chunks to locate the head
>
> fs/gfs2/bmap.c | 8 ++-
> fs/gfs2/incore.h | 8 ++-
> fs/gfs2/log.c | 4 +-
> fs/gfs2/lops.c | 180
> +++++++++++++++++++++++++++++++++++++--------------
> fs/gfs2/lops.h | 3 +-
> fs/gfs2/ops_fstype.c | 1 +
> fs/gfs2/recovery.c | 168 ++++++++++++-----------------------------------
> fs/gfs2/recovery.h | 2 +
> 8 files changed, 194 insertions(+), 180 deletions(-)
>
> --
> 2.4.11
>
>
* [Cluster-devel] [GFS2 v2 PATCH 4/4] gfs2: read journal in large chunks to locate the head
@ 2018-10-19 4:29 Abhi Das
0 siblings, 0 replies; 10+ messages in thread
From: Abhi Das @ 2018-10-19 4:29 UTC (permalink / raw)
To: cluster-devel.redhat.com
Use bio(s) to read in the journal sequentially in large chunks and
locate the head of the journal.
This version addresses the issues Christoph pointed out w.r.t. error handling
and the use of a deprecated API.
Signed-off-by: Abhi Das <adas@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
---
fs/gfs2/glops.c | 1 +
fs/gfs2/log.c | 4 +-
fs/gfs2/lops.c | 190 +++++++++++++++++++++++++++++++++++++++++++++++++--
fs/gfs2/lops.h | 4 +-
fs/gfs2/ops_fstype.c | 1 +
fs/gfs2/recovery.c | 123 ---------------------------------
fs/gfs2/recovery.h | 2 -
fs/gfs2/super.c | 1 +
8 files changed, 192 insertions(+), 134 deletions(-)
diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
index c63bee9..f79ef95 100644
--- a/fs/gfs2/glops.c
+++ b/fs/gfs2/glops.c
@@ -28,6 +28,7 @@
#include "util.h"
#include "trans.h"
#include "dir.h"
+#include "lops.h"
struct workqueue_struct *gfs2_freeze_wq;
diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
index 93a94df..c68a829 100644
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -734,7 +734,7 @@ void gfs2_write_log_header(struct gfs2_sbd *sdp, struct gfs2_jdesc *jd,
lh->lh_crc = cpu_to_be32(crc);
gfs2_log_write(sdp, page, sb->s_blocksize, 0, addr);
- gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE, op_flags);
+ gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE | op_flags);
log_flush_wait(sdp);
}
@@ -811,7 +811,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
gfs2_ordered_write(sdp);
lops_before_commit(sdp, tr);
- gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE, 0);
+ gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE);
if (sdp->sd_log_head != sdp->sd_log_flush_head) {
log_flush_wait(sdp);
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index 2295042..94dcab6 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -17,7 +17,9 @@
#include <linux/bio.h>
#include <linux/fs.h>
#include <linux/list_sort.h>
+#include <linux/blkdev.h>
+#include "bmap.h"
#include "dir.h"
#include "gfs2.h"
#include "incore.h"
@@ -193,7 +195,6 @@ static void gfs2_end_log_write_bh(struct gfs2_sbd *sdp, struct bio_vec *bvec,
/**
* gfs2_end_log_write - end of i/o to the log
* @bio: The bio
- * @error: Status of i/o request
*
* Each bio_vec contains either data from the pagecache or data
* relating to the log itself. Here we iterate over the bio_vec
@@ -230,20 +231,19 @@ static void gfs2_end_log_write(struct bio *bio)
/**
* gfs2_log_submit_bio - Submit any pending log bio
* @biop: Address of the bio pointer
- * @op: REQ_OP
- * @op_flags: req_flag_bits
+ * @opf: REQ_OP | op_flags
*
* Submit any pending part-built or full bio to the block device. If
* there is no pending bio, then this is a no-op.
*/
-void gfs2_log_submit_bio(struct bio **biop, int op, int op_flags)
+void gfs2_log_submit_bio(struct bio **biop, int opf)
{
struct bio *bio = *biop;
if (bio) {
struct gfs2_sbd *sdp = bio->bi_private;
atomic_inc(&sdp->sd_log_in_flight);
- bio_set_op_attrs(bio, op, op_flags);
+ bio->bi_opf = opf;
submit_bio(bio);
*biop = NULL;
}
@@ -304,7 +304,7 @@ static struct bio *gfs2_log_get_bio(struct gfs2_sbd *sdp, u64 blkno,
nblk >>= sdp->sd_fsb2bb_shift;
if (blkno == nblk && !flush)
return bio;
- gfs2_log_submit_bio(biop, op, 0);
+ gfs2_log_submit_bio(biop, op);
}
*biop = gfs2_log_alloc_bio(sdp, blkno, end_io);
@@ -375,6 +375,184 @@ void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page)
gfs2_log_bmap(sdp));
}
+/**
+ * gfs2_end_log_read - end I/O callback for reads from the log
+ * @bio: The bio
+ *
+ * Simply unlock the pages in the bio. The main thread will wait on them and
+ * process them in order as necessary.
+ */
+
+static void gfs2_end_log_read(struct bio *bio)
+{
+ struct page *page;
+ struct bio_vec *bvec;
+ int i;
+
+ bio_for_each_segment_all(bvec, bio, i) {
+ page = bvec->bv_page;
+ if (bio->bi_status) {
+ int err = blk_status_to_errno(bio->bi_status);
+
+ SetPageError(page);
+ mapping_set_error(page->mapping, err);
+ }
+ unlock_page(page);
+ }
+
+ bio_put(bio);
+}
+
+/**
+ * gfs2_jhead_pg_srch - Look for the journal head in a given page.
+ * @jd: The journal descriptor
+ * @page: The page to look in
+ *
+ * Returns: true if found, false otherwise.
+ */
+
+static bool gfs2_jhead_pg_srch(struct gfs2_jdesc *jd,
+ struct gfs2_log_header_host *head,
+ struct page *page)
+{
+ struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
+ struct gfs2_log_header_host uninitialized_var(lh);
+ void *kaddr = kmap_atomic(page);
+ unsigned int offset;
+ bool ret = false;
+
+ for (offset = 0; offset < PAGE_SIZE; offset += sdp->sd_sb.sb_bsize) {
+ if (!__get_log_header(sdp, kaddr + offset, 0, &lh)) {
+ if (lh.lh_sequence > head->lh_sequence)
+ *head = lh;
+ else {
+ ret = true;
+ break;
+ }
+ }
+ }
+ kunmap_atomic(kaddr);
+ return ret;
+}
+
+/**
+ * gfs2_jhead_process_page - Search/cleanup a page
+ * @jd: The journal descriptor
+ * @index: Index of the page to look into
+ * @done: If set, perform only cleanup, else search and set if found.
+ *
+ * Find the page with 'index' in the journal's mapping. Search the page for
+ * the journal head if requested (*done == false). Release refs on the
+ * page so the page cache can reclaim it (put_page() twice). We grabbed a
+ * reference on this page two times, first when we did a find_or_create_page()
+ * to obtain the page to add it to the bio and second when we do a
+ * find_get_page() here to get the page to wait on while I/O on it is being
+ * completed.
+ * This function is also used to free up a page we might've grabbed but not
+ * used. Maybe we added it to a bio, but not submitted it for I/O. Or we
+ * submitted the I/O, but we already found the jhead so we only need to drop
+ * our references to the page.
+ */
+
+static void gfs2_jhead_process_page(struct gfs2_jdesc *jd, unsigned long index,
+ struct gfs2_log_header_host *head,
+ bool *done)
+{
+ struct page *page;
+
+ page = find_get_page(jd->jd_inode->i_mapping, index);
+ wait_on_page_locked(page);
+
+ if (PageError(page))
+ *done = true;
+
+ if (!*done)
+ *done = gfs2_jhead_pg_srch(jd, head, page);
+
+ put_page(page); /* Once for find_get_page */
+ put_page(page); /* Once more for find_or_create_page */
+}
+
+/**
+ * gfs2_find_jhead - find the head of a log
+ * @jd: The journal descriptor
+ * @head: The log descriptor for the head of the log is returned here
+ *
+ * Do a search of a journal by reading it in large chunks using bios and find
+ * the valid log entry with the highest sequence number. (i.e. the log head)
+ *
+ * Returns: 0 on success, errno otherwise
+ */
+
+int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
+{
+ struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
+ struct address_space *mapping = jd->jd_inode->i_mapping;
+ struct gfs2_journal_extent *je;
+ u32 block, read_idx = 0, submit_idx = 0, index = 0;
+ int shift = PAGE_SHIFT - sdp->sd_sb.sb_bsize_shift;
+ int blocks_per_page = 1 << shift, sz, ret = 0;
+ struct bio *bio = NULL;
+ struct page *page;
+ bool done = false;
+ errseq_t since;
+
+ memset(head, 0, sizeof(*head));
+ if (list_empty(&jd->extent_list))
+ gfs2_map_journal_extents(sdp, jd);
+
+ since = filemap_sample_wb_err(mapping);
+ list_for_each_entry(je, &jd->extent_list, list) {
+ for (block = 0; block < je->blocks; block += blocks_per_page) {
+ index = (je->lblock + block) >> shift;
+
+ page = find_or_create_page(mapping, index, GFP_NOFS);
+ if (!page) {
+ ret = -ENOMEM;
+ done = true;
+ goto out;
+ }
+
+ if (bio) {
+ sz = bio_add_page(bio, page, PAGE_SIZE, 0);
+ if (sz == PAGE_SIZE)
+ goto page_added;
+ submit_idx = index;
+ submit_bio(bio);
+ bio = NULL;
+ }
+
+ bio = gfs2_log_alloc_bio(sdp,
+ je->dblock + (index << shift),
+ gfs2_end_log_read);
+ bio->bi_opf = REQ_OP_READ;
+ sz = bio_add_page(bio, page, PAGE_SIZE, 0);
+ gfs2_assert_warn(sdp, sz == PAGE_SIZE);
+
+page_added:
+ if (submit_idx <= read_idx + BIO_MAX_PAGES) {
+ /* Keep at least one bio in flight */
+ continue;
+ }
+
+ gfs2_jhead_process_page(jd, read_idx++, head, &done);
+ if (done)
+ goto out; /* found */
+ }
+ }
+
+out:
+ if (bio)
+ submit_bio(bio);
+ while (read_idx <= index)
+ gfs2_jhead_process_page(jd, read_idx++, head, &done);
+
+ if (!ret)
+ ret = filemap_check_wb_err(mapping, since);
+
+ return ret;
+}
+
static struct page *gfs2_get_log_desc(struct gfs2_sbd *sdp, u32 ld_type,
u32 ld_length, u32 ld_data1)
{
diff --git a/fs/gfs2/lops.h b/fs/gfs2/lops.h
index 711c4d8..331160f 100644
--- a/fs/gfs2/lops.h
+++ b/fs/gfs2/lops.h
@@ -30,8 +30,10 @@ extern u64 gfs2_log_bmap(struct gfs2_sbd *sdp);
extern void gfs2_log_write(struct gfs2_sbd *sdp, struct page *page,
unsigned size, unsigned offset, u64 blkno);
extern void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page);
-extern void gfs2_log_submit_bio(struct bio **biop, int op, int op_flags);
+extern void gfs2_log_submit_bio(struct bio **biop, int opf);
extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh);
+extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
+ struct gfs2_log_header_host *head);
static inline unsigned int buf_limit(struct gfs2_sbd *sdp)
{
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 4ec69d9..ae3ee51 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -41,6 +41,7 @@
#include "dir.h"
#include "meta_io.h"
#include "trace_gfs2.h"
+#include "lops.h"
#define DO 0
#define UNDO 1
diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c
index 2dac430..7389e44 100644
--- a/fs/gfs2/recovery.c
+++ b/fs/gfs2/recovery.c
@@ -182,129 +182,6 @@ static int get_log_header(struct gfs2_jdesc *jd, unsigned int blk,
}
/**
- * find_good_lh - find a good log header
- * @jd: the journal
- * @blk: the segment to start searching from
- * @lh: the log header to fill in
- * @forward: if true search forward in the log, else search backward
- *
- * Call get_log_header() to get a log header for a segment, but if the
- * segment is bad, either scan forward or backward until we find a good one.
- *
- * Returns: errno
- */
-
-static int find_good_lh(struct gfs2_jdesc *jd, unsigned int *blk,
- struct gfs2_log_header_host *head)
-{
- unsigned int orig_blk = *blk;
- int error;
-
- for (;;) {
- error = get_log_header(jd, *blk, head);
- if (error <= 0)
- return error;
-
- if (++*blk == jd->jd_blocks)
- *blk = 0;
-
- if (*blk == orig_blk) {
- gfs2_consist_inode(GFS2_I(jd->jd_inode));
- return -EIO;
- }
- }
-}
-
-/**
- * jhead_scan - make sure we've found the head of the log
- * @jd: the journal
- * @head: this is filled in with the log descriptor of the head
- *
- * At this point, seg and lh should be either the head of the log or just
- * before. Scan forward until we find the head.
- *
- * Returns: errno
- */
-
-static int jhead_scan(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
-{
- unsigned int blk = head->lh_blkno;
- struct gfs2_log_header_host lh;
- int error;
-
- for (;;) {
- if (++blk == jd->jd_blocks)
- blk = 0;
-
- error = get_log_header(jd, blk, &lh);
- if (error < 0)
- return error;
- if (error == 1)
- continue;
-
- if (lh.lh_sequence == head->lh_sequence) {
- gfs2_consist_inode(GFS2_I(jd->jd_inode));
- return -EIO;
- }
- if (lh.lh_sequence < head->lh_sequence)
- break;
-
- *head = lh;
- }
-
- return 0;
-}
-
-/**
- * gfs2_find_jhead - find the head of a log
- * @jd: the journal
- * @head: the log descriptor for the head of the log is returned here
- *
- * Do a binary search of a journal and find the valid log entry with the
- * highest sequence number. (i.e. the log head)
- *
- * Returns: errno
- */
-
-int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head)
-{
- struct gfs2_log_header_host lh_1, lh_m;
- u32 blk_1, blk_2, blk_m;
- int error;
-
- blk_1 = 0;
- blk_2 = jd->jd_blocks - 1;
-
- for (;;) {
- blk_m = (blk_1 + blk_2) / 2;
-
- error = find_good_lh(jd, &blk_1, &lh_1);
- if (error)
- return error;
-
- error = find_good_lh(jd, &blk_m, &lh_m);
- if (error)
- return error;
-
- if (blk_1 == blk_m || blk_m == blk_2)
- break;
-
- if (lh_1.lh_sequence <= lh_m.lh_sequence)
- blk_1 = blk_m;
- else
- blk_2 = blk_m;
- }
-
- error = jhead_scan(jd, &lh_1);
- if (error)
- return error;
-
- *head = lh_1;
-
- return error;
-}
-
-/**
* foreach_descriptor - go through the active part of the log
* @jd: the journal
* @start: the first log header in the active region
diff --git a/fs/gfs2/recovery.h b/fs/gfs2/recovery.h
index 943a67c..4d00a92 100644
--- a/fs/gfs2/recovery.h
+++ b/fs/gfs2/recovery.h
@@ -27,8 +27,6 @@ extern int gfs2_revoke_add(struct gfs2_jdesc *jd, u64 blkno, unsigned int where)
extern int gfs2_revoke_check(struct gfs2_jdesc *jd, u64 blkno, unsigned int where);
extern void gfs2_revoke_clean(struct gfs2_jdesc *jd);
-extern int gfs2_find_jhead(struct gfs2_jdesc *jd,
- struct gfs2_log_header_host *head);
extern int gfs2_recover_journal(struct gfs2_jdesc *gfs2_jd, bool wait);
extern void gfs2_recover_func(struct work_struct *work);
extern int __get_log_header(struct gfs2_sbd *sdp, const struct gfs2_log_header *lh,
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index a971862..ae38ba7 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -45,6 +45,7 @@
#include "util.h"
#include "sys.h"
#include "xattr.h"
+#include "lops.h"
#define args_neq(a1, a2, x) ((a1)->ar_##x != (a2)->ar_##x)
--
2.4.11
end of thread, other threads:[~2018-10-19 4:29 UTC | newest]
Thread overview: 10+ messages
2018-09-10 14:56 [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 1/4] gfs2: add timing info to map_journal_extents Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 2/4] gfs2: changes to gfs2_log_XXX_bio Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 3/4] gfs2: add a helper function to get_log_header that can be used elsewhere Abhi Das
2018-09-10 14:56 ` [Cluster-devel] [GFS2 v2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das
2018-09-10 15:35 ` [Cluster-devel] [GFS2 v2 PATCH 0/4] Speed up journal head lookup Andreas Gruenbacher
2018-09-10 15:46 ` Bob Peterson
2018-09-10 16:28 ` Abhijith Das
2018-09-10 18:46 ` Abhijith Das
-- strict thread matches above, loose matches on Subject: below --
2018-10-19 4:29 [Cluster-devel] [GFS2 v2 PATCH 4/4] gfs2: read journal in large chunks to locate the head Abhi Das