From: Andreas Gruenbacher <agruenba@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] [PATCH 03/12] gfs2: Minor gfs2_write_revokes cleanups
Date: Mon, 14 Dec 2020 09:54:33 +0100
Message-ID: <20201214085442.45467-4-agruenba@redhat.com>
In-Reply-To: <20201214085442.45467-1-agruenba@redhat.com>

Clean up the computations in gfs2_write_revokes (no change in functionality).

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
 fs/gfs2/log.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
index 2e9314091c81..c65fdb1a30a0 100644
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -712,11 +712,12 @@ void gfs2_glock_remove_revoke(struct gfs2_glock *gl)
 void gfs2_write_revokes(struct gfs2_sbd *sdp)
 {
 	/* number of revokes we still have room for */
-	int max_revokes = (sdp->sd_sb.sb_bsize - sizeof(struct gfs2_log_descriptor)) / sizeof(u64);
+	unsigned int max_revokes = sdp->sd_ldptrs;
 
 	gfs2_log_lock(sdp);
-	while (sdp->sd_log_num_revoke > max_revokes)
-		max_revokes += (sdp->sd_sb.sb_bsize - sizeof(struct gfs2_meta_header)) / sizeof(u64);
+	if (sdp->sd_log_num_revoke > sdp->sd_ldptrs)
+		max_revokes += roundup(sdp->sd_log_num_revoke - sdp->sd_ldptrs,
+				       sdp->sd_inptrs);
 	max_revokes -= sdp->sd_log_num_revoke;
 	if (!sdp->sd_log_num_revoke) {
 		atomic_dec(&sdp->sd_log_blks_free);
-- 
2.26.2
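For readers checking the arithmetic: the old while loop and the new roundup()-based
expression compute the same bound on max_revokes. The stand-alone userspace sketch
below is only an illustration of that equivalence, not kernel code; the ldptrs/inptrs
values are made-up placeholders for sd_ldptrs and sd_inptrs, and roundup() is defined
locally as a simplified equivalent of the kernel macro.

/* Sketch: old while-loop vs. new roundup() computation in gfs2_write_revokes.
 * Placeholder values, userspace only.
 */
#include <assert.h>
#include <stdio.h>

/* Simplified equivalent of the kernel's roundup() macro. */
#define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))

static unsigned int old_way(unsigned int num_revoke,
			    unsigned int ldptrs, unsigned int inptrs)
{
	unsigned int max_revokes = ldptrs;

	/* Old code: keep adding one continuation block's worth of revokes
	 * until there is room for all pending revokes. */
	while (num_revoke > max_revokes)
		max_revokes += inptrs;
	return max_revokes;
}

static unsigned int new_way(unsigned int num_revoke,
			    unsigned int ldptrs, unsigned int inptrs)
{
	unsigned int max_revokes = ldptrs;

	/* New code: compute the same total in one step. */
	if (num_revoke > ldptrs)
		max_revokes += roundup(num_revoke - ldptrs, inptrs);
	return max_revokes;
}

int main(void)
{
	/* Arbitrary placeholder pointer counts, not real gfs2 on-disk numbers. */
	const unsigned int ldptrs = 503, inptrs = 509;

	for (unsigned int n = 0; n < 10000; n++)
		assert(old_way(n, ldptrs, inptrs) == new_way(n, ldptrs, inptrs));
	printf("old and new computations agree\n");
	return 0;
}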



Thread overview: 16+ messages
2020-12-14  8:54 [Cluster-devel] [RFC PATCH 00/12] Some log space management cleanups Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 01/12] gfs2: Deobfuscate function jdesc_find_i Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 02/12] gfs2: Simplify the buf_limit and databuf_limit definitions Andreas Gruenbacher
2020-12-14  8:54 ` Andreas Gruenbacher [this message]
2020-12-14  8:54 ` [Cluster-devel] [PATCH 04/12] gfs2: Some documentation clarifications Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 05/12] gfs2: A minor debugging improvement Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 06/12] gfs2: Clean up ail2_empty Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 07/12] gfs2: Get rid of on-stack transactions Andreas Gruenbacher
2020-12-14 14:02   ` Bob Peterson
2020-12-14 14:05     ` Steven Whitehouse
2020-12-14 17:08       ` Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 08/12] gfs2: Get rid of sd_reserving_log Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 09/12] gfs2: Move lock flush locking to gfs2_trans_{begin, end} Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 10/12] gfs2: Don't wait for journal flush in clean_journal Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 11/12] gfs2: Clean up gfs2_log_reserve Andreas Gruenbacher
2020-12-14  8:54 ` [Cluster-devel] [PATCH 12/12] gfs2: Use a tighter bound in gfs2_trans_begin Andreas Gruenbacher
