From: Julien Grall <julien.grall@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH V2] libxenstore: filter watch events in libxenstore when we unwatch
Date: Wed, 26 Sep 2012 17:24:03 +0100
Message-ID: <50632C23.3080200@citrix.com>
In-Reply-To: <20579.6438.538065.533618@mariner.uk.xensource.com>

On 09/26/2012 04:03 PM, Ian Jackson wrote:

> Julien Grall writes ("[PATCH V2] libxenstore: filter watch events in libxenstore when we unwatch"):
>> +
>> +/* Clear the pipe token if there are no more pending watches.
>> + * We assume the watch_mutex is already held.
>> + */
>> +static void xs_clear_watch_pipe(struct xs_handle *h)
> 
> I think this would be better called xs_maybe_clear_watch_pipe or
> something.  Since it doesn't always clear the watch pipe, it makes the
> call sites confusing.

I will fix it in the next patch version.

>> @@ -855,14 +864,62 @@ char **xs_read_watch(struct xs_handle *h, unsigned int *num)
>>  bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
>>  {
>>  	struct iovec iov[2];
>> +	struct xs_stored_msg *msg, *tmsg;
>> +	bool res;
>> +	char *s, *p;
>> +	unsigned int i;
>> +	char *l_token, *l_path;
>>  
>>  	iov[0].iov_base = (char *)path;
>>  	iov[0].iov_len = strlen(path) + 1;
>>  	iov[1].iov_base = (char *)token;
>>  	iov[1].iov_len = strlen(token) + 1;
>>  
>> -	return xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
>> -				ARRAY_SIZE(iov), NULL));
>> +	res = xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
>> +						   ARRAY_SIZE(iov), NULL));
>> +
>> +	/* Filter the watch list to remove potential message */
>> +	mutex_lock(&h->watch_mutex);
>> +
>> +	if (list_empty(&h->watch_list)) {
>> +		mutex_unlock(&h->watch_mutex);
>> +		return res;
>> +	}
> 
> I still think this is redundant, so it should not be here.


The piece of code I moved into the new function expects to read one
character from the pipe. So if the pipe is empty it will either block
(if the file descriptor is not set to O_NONBLOCK) or loop, because
read() returns 0 or -1.

That check is definitely useful if we want to avoid this annoying
problem. Perhaps we should modify the behaviour of that piece of code
instead.
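
To make it concrete, here is a rough sketch of what I have in mind
(the read loop is the one moved out of xs_read_watch; the caller is
assumed to hold the watch_mutex, as in the patch):

static void xs_maybe_clear_watch_pipe(struct xs_handle *h)
{
	char c;

	/* Events still pending: the pipe must stay readable. */
	if (!list_empty(&h->watch_list))
		return;

	/* No pending events: drain the single wakeup byte.  Without the
	 * check above, this read would block (or spin on 0/-1) when the
	 * pipe is empty. */
	if (h->watch_pipe[0] != -1)
		while (read(h->watch_pipe[0], &c, 1) != 1)
			continue;
}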

>> +	list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
>> +		assert(msg->hdr.type == XS_WATCH_EVENT);
>> +
>> +		s = msg->body;
>> +
>> +		l_token = NULL;
>> +		l_path = NULL;
>> +
>> +		for (p = s, i = 0; p < msg->body + msg->hdr.len; p++) {
>> +			if (*p == '\0')
>> +			{
>> +				if (i == XS_WATCH_TOKEN)
>> +					l_token = s;
>> +				else if (i == XS_WATCH_PATH)
>> +					l_path = s;
>> +				i++;
>> +				s = p + 1;
>> +			}
>> +		}
>> +
>> +		if (l_token && !strcmp(token, l_token)
>> +			/* Use strncmp because we can have a watch fired on sub-directory */
> 
> Oh bum.  I see a problem.  This is quite a bad problem.
> 
> It is legal to do this:
> 
>    client:
> 
>      XS_WATCH /foo token1
>      XS_WATCH /foo/bar token1
> 
> Now if you get a watch event you get the token and the actual path.
> So suppose we have, simultaneously:
> 
>    our client:                          another client:
>      XS_UNWATCH /foo/bar token1          WRITE /foo/bar/zonk sponge
> 
> Then xenstored will generate two watch events:
>                                 WATCH_EVENT /foo/bar/zonk token1
>                                 WATCH_EVENT /foo/bar/zonk token1
> 
> With your patch, both of these would be thrown away.  Whereas in fact
> one of them should still be presented.
> 
> How confident are we that there are no clients which rely on this not
> changing ?
>
> At the very least this needs a documentation patch explaining the
> undesirable consequences of setting multiple watches with the same
> token.  Arguably it needs a flag to the library (or a new function
> name) to ask for this new behaviour.
> 
> I would have to think, for example, about whether the new libxl event
> sub-library would make any mistakes as a result of this proposed
> change.  I think it wouldn't...


If a client decides to use the same token for multiple paths (including
one which is a sub-directory of another), it is not possible to know
which WATCH_EVENT we need to remove.

The problem this patch mainly tries to resolve is for clients which use
a pointer as the token (as QEMU does).

Adding a flag to xs_open could be a good solution, e.g.:

#define XS_UNWATCH_SAFE (1UL<<2)

If the flag is not set, xs_unwatch keeps the same (buggy?) behaviour as
before. When it is set, xs_unwatch walks the watch list and removes the
spurious watch events.
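
Roughly (a sketch only; it assumes xs_open records its flags in a new
field, say h->flags, which does not exist today):

bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
{
	struct iovec iov[2];
	bool res;

	iov[0].iov_base = (char *)path;
	iov[0].iov_len = strlen(path) + 1;
	iov[1].iov_base = (char *)token;
	iov[1].iov_len = strlen(token) + 1;

	res = xs_bool(xs_talkv(h, XBT_NULL, XS_UNWATCH, iov,
			       ARRAY_SIZE(iov), NULL));

	/* Historical behaviour: leave the pending event list alone. */
	if (!(h->flags & XS_UNWATCH_SAFE))
		return res;

	/* New behaviour: filter the pending events under watch_mutex,
	 * dropping those whose token (and path) match, as in the patch. */
	mutex_lock(&h->watch_mutex);
	/* ... filtering loop from the patch goes here ... */
	xs_maybe_clear_watch_pipe(h);
	mutex_unlock(&h->watch_mutex);

	return res;
}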

What do you think?

Sincerely yours,

-- 
Julien Grall
