From: Steven Whitehouse <swhiteho@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] [PATCH 02/17] DLM: Eliminate CF_WRITE_PENDING flag
Date: Fri, 18 Aug 2017 10:01:26 +0100
Message-ID: <3151ac72-460d-b1ab-b36c-b308a547c4a0@redhat.com>
In-Reply-To: <0dfa2274020742cf94e4a4f3dac0630e@TGXML394.toshiba.local>

Hi,


On 18/08/17 00:38, tsutomu.owa at toshiba.co.jp wrote:
> Hi, thank you for your review.
>
>>> +	cond_resched();
>>> +	queue_work(send_workqueue, &con->swork);
>> I think it would make more sense to call cond_resched() after the
>> queue_work(), since we want the queued work to run soon after it has
>> been queued.
> Well, we are fine with that ordering.
> Would it be better to ask Bob Peterson <rpeterso@redhat.com>, who wrote this patch?
>
> thanks,
> -- owa
I'm sure it will work with either ordering. The queue_work() call will
schedule a task to run at a later date, so it makes more sense to put the
cond_resched() after it, since the cpu may then be yielded to the newly
queued task immediately and thus reduce latency. If the cond_resched()
call comes first, it will need to go around an additional loop before the
cpu is given up to the newly queued work function.

The difference in latency might not be easily measurable, but logically
it makes more sense to consider releasing the cpu to another task just
after creating a new work item than just before it.
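
To make that concrete, the out_connect path in send_to_sock() would then
end up looking roughly like the following (just a sketch of the ordering
under discussion, not a tested change):

out_connect:
	mutex_unlock(&con->sock_mutex);
	/* queue the send work first ... */
	queue_work(send_workqueue, &con->swork);
	/* ... then offer to yield the cpu, so the newly queued work
	   gets a chance to run with minimal delay */
	cond_resched();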

Steve.

>
> -----Original Message-----
> From: Steven Whitehouse [mailto:swhiteho at redhat.com]
> Sent: Wednesday, August 9, 2017 8:15 PM
> To: owa tsutomu; cluster-devel at redhat.com
> Cc: miyauchi tadashi
> Subject: Re: [Cluster-devel] [PATCH 02/17] DLM: Eliminate CF_WRITE_PENDING flag
>
> Hi,
>
>
> On 09/08/17 06:49, tsutomu.owa at toshiba.co.jp wrote:
>> From: Bob Peterson <rpeterso@redhat.com>
>>
>> Before this patch the CF_WRITE_PENDING flag was used to indicate
>> when writes to the socket were pending. This caused race conditions
>> whereby one process set the bit and another cleared it. Instead,
>> we just check to see if there's anything there to be sent. This
>> makes the code more intuitive and bullet-proof.
>>
>> Signed-off-by: Bob Peterson <rpeterso@redhat.com>
>> Reviewed-by: Tadashi Miyauchi <miyauchi@toshiba-tops.co.jp>
>>
>> ---
>>    fs/dlm/lowcomms.c | 21 ++++++++-------------
>>    1 file changed, 8 insertions(+), 13 deletions(-)
>>
>> diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
>> index 41bf93a..a9b2483 100644
>> --- a/fs/dlm/lowcomms.c
>> +++ b/fs/dlm/lowcomms.c
>> @@ -106,7 +106,6 @@ struct connection {
>>    	struct mutex sock_mutex;
>>    	unsigned long flags;
>>    #define CF_READ_PENDING 1
>> -#define CF_WRITE_PENDING 2
>>    #define CF_INIT_PENDING 4
>>    #define CF_IS_OTHERCON 5
>>    #define CF_CLOSE 6
>> @@ -426,8 +425,7 @@ static void lowcomms_write_space(struct sock *sk)
>>    		clear_bit(SOCKWQ_ASYNC_NOSPACE, &con->sock->flags);
>>    	}
>>    
>> -	if (!test_and_set_bit(CF_WRITE_PENDING, &con->flags))
>> -		queue_work(send_workqueue, &con->swork);
>> +	queue_work(send_workqueue, &con->swork);
>>    }
>>    
>>    static inline void lowcomms_connect_sock(struct connection *con)
>> @@ -578,7 +576,6 @@ static void make_sockaddr(struct sockaddr_storage *saddr, uint16_t port,
>>    static void close_connection(struct connection *con, bool and_other,
>>    			     bool tx, bool rx)
>>    {
>> -	clear_bit(CF_WRITE_PENDING, &con->flags);
>>    	if (tx && cancel_work_sync(&con->swork))
>>    		log_print("canceled swork for node %d", con->nodeid);
>>    	if (rx && cancel_work_sync(&con->rwork))
>> @@ -1077,7 +1074,6 @@ static void sctp_connect_to_sock(struct connection *con)
>>    	if (result == 0)
>>    		goto out;
>>    
>> -
>>    bind_err:
>>    	con->sock = NULL;
>>    	sock_release(sock);
>> @@ -1102,7 +1098,6 @@ static void sctp_connect_to_sock(struct connection *con)
>>    
>>    out:
>>    	mutex_unlock(&con->sock_mutex);
>> -	set_bit(CF_WRITE_PENDING, &con->flags);
>>    }
>>    
>>    /* Connect a new socket to its peer */
>> @@ -1196,7 +1191,6 @@ static void tcp_connect_to_sock(struct connection *con)
>>    	}
>>    out:
>>    	mutex_unlock(&con->sock_mutex);
>> -	set_bit(CF_WRITE_PENDING, &con->flags);
>>    	return;
>>    }
>>    
>> @@ -1452,9 +1446,7 @@ void dlm_lowcomms_commit_buffer(void *mh)
>>    	e->len = e->end - e->offset;
>>    	spin_unlock(&con->writequeue_lock);
>>    
>> -	if (!test_and_set_bit(CF_WRITE_PENDING, &con->flags)) {
>> -		queue_work(send_workqueue, &con->swork);
>> -	}
>> +	queue_work(send_workqueue, &con->swork);
>>    	return;
>>    
>>    out:
>> @@ -1524,12 +1516,15 @@ static void send_to_sock(struct connection *con)
>>    send_error:
>>    	mutex_unlock(&con->sock_mutex);
>>    	close_connection(con, false, false, true);
>> -	lowcomms_connect_sock(con);
>> +	/* Requeue the send work. When the work daemon runs again, it will try
>> +	   a new connection, then call this function again. */
>> +	queue_work(send_workqueue, &con->swork);
>>    	return;
>>    
>>    out_connect:
>>    	mutex_unlock(&con->sock_mutex);
>> -	lowcomms_connect_sock(con);
>> +	cond_resched();
>> +	queue_work(send_workqueue, &con->swork);
> I think it would make more sense to call cond_resched() after the
> queue_work(), since we want the queued work to run soon after it has
> been queued.
>
> Steve.
>
>>    }
>>    
>>    static void clean_one_writequeue(struct connection *con)
>> @@ -1591,7 +1586,7 @@ static void process_send_sockets(struct work_struct *work)
>>    
>>    	if (con->sock == NULL) /* not mutex protected so check it inside too */
>>    		con->connect_action(con);
>> -	if (test_and_clear_bit(CF_WRITE_PENDING, &con->flags))
>> +	if (!list_empty(&con->writequeue))
>>    		send_to_sock(con);
>>    }
>>    
>
>



