From: Steven Whitehouse <swhiteho@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] [GFS2 PATCH] GFS2: Rework gfs2_logd daemon
Date: Mon, 3 Jul 2017 10:32:09 +0100
Message-ID: <2d222be3-0aec-8bfc-8139-e10ad31ebe34@redhat.com>
In-Reply-To: <995075348.27804274.1498841138568.JavaMail.zimbra@redhat.com>

Hi,


On 30/06/17 17:45, Bob Peterson wrote:
> Hi,
>
> This patch reorganizes some of the hokey logic in gfs2_logd.
> It also tries to fetch the logd_secs tunable only once per
> second to avoid too many spinlock conflicts.
>
> Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Again, please explain what the benefit is. The tunable should only ever 
be accessed by gfs2_logd and by a userland process that is updating the 
tunable. If there is contention here, something is very wrong. The patch 
also appears to remove the check for whether a journal or ail flush is 
required before gfs2_logd sleeps, so it would likely make performance 
worse (the condensed comparison after the quoted patch below spells out 
the difference).
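
For reference, reading a tunable is just a spinlock-protected load, 
roughly along these lines (condensed from fs/gfs2/incore.h; check the 
tree for the exact definition):

	static inline unsigned int gfs2_tune_get_i(struct gfs2_tune *gt,
						   unsigned int *p)
	{
		unsigned int x;
		/* gt_spin should only be shared with sysfs tunable updates */
		spin_lock(&gt->gt_spin);
		x = *p;
		spin_unlock(&gt->gt_spin);
		return x;
	}

	#define gfs2_tune_get(sdp, field) \
		gfs2_tune_get_i(&(sdp)->sd_tune, &(sdp)->sd_tune.field)

so the only other code taking gt_spin should be the occasional sysfs 
write that updates a tunable.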

Steve.

> ---
> diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
> index 32aa1f0..3f18db4 100644
> --- a/fs/gfs2/log.c
> +++ b/fs/gfs2/log.c
> @@ -915,15 +915,12 @@ int gfs2_logd(void *data)
>   	struct gfs2_sbd *sdp = data;
>   	unsigned long t = 1;
>   	DEFINE_WAIT(wait);
> -	bool did_flush;
> +	unsigned long tune_time = 0;
>   
>   	while (!kthread_should_stop()) {
> -
> -		did_flush = false;
>   		if (gfs2_jrnl_flush_reqd(sdp) || t == 0) {
>   			gfs2_ail1_empty(sdp);
>   			gfs2_log_flush(sdp, NULL, NORMAL_FLUSH);
> -			did_flush = true;
>   		}
>   
>   		if (gfs2_ail_flush_reqd(sdp)) {
> @@ -931,26 +928,22 @@ int gfs2_logd(void *data)
>   			gfs2_ail1_wait(sdp);
>   			gfs2_ail1_empty(sdp);
>   			gfs2_log_flush(sdp, NULL, NORMAL_FLUSH);
> -			did_flush = true;
>   		}
>   
> -		if (!gfs2_ail_flush_reqd(sdp) || did_flush)
> -			wake_up(&sdp->sd_log_waitq);
> -
> -		t = gfs2_tune_get(sdp, gt_logd_secs) * HZ;
> +		wake_up(&sdp->sd_log_waitq);
>   
>   		try_to_freeze();
>   
> -		do {
> -			prepare_to_wait(&sdp->sd_logd_waitq, &wait,
> -					TASK_INTERRUPTIBLE);
> -			if (!gfs2_ail_flush_reqd(sdp) &&
> -			    !gfs2_jrnl_flush_reqd(sdp) &&
> -			    !kthread_should_stop())
> -				t = schedule_timeout(t);
> -		} while(t && !gfs2_ail_flush_reqd(sdp) &&
> -			!gfs2_jrnl_flush_reqd(sdp) &&
> -			!kthread_should_stop());
> +		if (kthread_should_stop())
> +			break;
> +
> +		if (time_after(jiffies, tune_time + HZ)) {
> +			t = gfs2_tune_get(sdp, gt_logd_secs) * HZ;
> +			tune_time = jiffies;
> +		}
> +		prepare_to_wait(&sdp->sd_logd_waitq, &wait,
> +				TASK_INTERRUPTIBLE);
> +		t = schedule_timeout(t);
>   		finish_wait(&sdp->sd_logd_waitq, &wait);
>   	}
>   
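
Condensed, the behavioural change in the wait logic comes down to this 
(not verbatim from either version, just the shape of it):

	/* before: re-check for pending flush work before and while sleeping */
	do {
		prepare_to_wait(&sdp->sd_logd_waitq, &wait, TASK_INTERRUPTIBLE);
		if (!gfs2_ail_flush_reqd(sdp) &&
		    !gfs2_jrnl_flush_reqd(sdp) &&
		    !kthread_should_stop())
			t = schedule_timeout(t);
	} while (t && !gfs2_ail_flush_reqd(sdp) &&
		 !gfs2_jrnl_flush_reqd(sdp) &&
		 !kthread_should_stop());
	finish_wait(&sdp->sd_logd_waitq, &wait);

	/* after: sleep for up to t jiffies regardless; a flush request that
	 * was raised before prepare_to_wait() is not noticed until the
	 * timeout expires or a further wake-up arrives */
	prepare_to_wait(&sdp->sd_logd_waitq, &wait, TASK_INTERRUPTIBLE);
	t = schedule_timeout(t);
	finish_wait(&sdp->sd_logd_waitq, &wait);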


