From: Greg KH <gregkh@suse.de>
To: Haiyang Zhang <haiyangz@microsoft.com>
Cc: "'linux-kernel@vger.kernel.org'" <linux-kernel@vger.kernel.org>,
	"'devel@driverdev.osuosl.org'" <devel@driverdev.osuosl.org>,
	"'virtualization@lists.osdl.org'" <virtualization@lists.osdl.org>,
	Hank Janssen <hjanssen@microsoft.com>
Subject: Re: [PATCH 1/2] staging: hv: Fix race condition in hv_utils module initialization.
Date: Wed, 19 May 2010 09:10:11 -0700
Message-ID: <20100519161011.GA20266@suse.de>
In-Reply-To: <1FB5E1D5CA062146B38059374562DF7266B8930E@TK5EX14MBXC128.redmond.corp.microsoft.com>

On Wed, May 19, 2010 at 03:54:17PM +0000, Haiyang Zhang wrote:
> From: Haiyang Zhang <haiyangz@microsoft.com>
> 
> Subject: [PATCH 1/2] staging: hv: Fix race condition in hv_utils module initialization.
> There is a possible race condition when hv_utils starts to load while
> hv_vmbus is still loading; a NULL pointer dereference could happen.
> This patch adds an atomic counter to ensure the hv_utils module initialization
> happens after all vmbus IC channels are initialized.
> 
> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> Signed-off-by: Hank Janssen <hjanssen@microsoft.com>
> 
> ---
>  drivers/staging/hv/channel_mgmt.c |   26 +++++++++++++++-----------
>  drivers/staging/hv/hv_utils.c     |   11 ++++++++---
>  drivers/staging/hv/utils.h        |    5 +++++
>  3 files changed, 28 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/staging/hv/channel_mgmt.c b/drivers/staging/hv/channel_mgmt.c
> index 3f53b4d..b5b6a70 100644
> --- a/drivers/staging/hv/channel_mgmt.c
> +++ b/drivers/staging/hv/channel_mgmt.c
> @@ -33,7 +33,6 @@ struct vmbus_channel_message_table_entry {
>  	void (*messageHandler)(struct vmbus_channel_message_header *msg);
>  };
>  
> -#define MAX_MSG_TYPES                    3
>  #define MAX_NUM_DEVICE_CLASSES_SUPPORTED 7
>  
>  static const struct hv_guid
> @@ -233,6 +232,11 @@ struct hyperv_service_callback hv_cb_utils[MAX_MSG_TYPES] = {
>  };
>  EXPORT_SYMBOL(hv_cb_utils);
>  
> +/* Counter of IC channels initialized */
> +atomic_t hv_utils_initcnt = ATOMIC_INIT(0);
> +EXPORT_SYMBOL(hv_utils_initcnt);
> +
> +
>  /*
>   * AllocVmbusChannel - Allocate and initialize a vmbus channel object
>   */
> @@ -373,22 +377,22 @@ static void VmbusChannelProcessOffer(void *context)
>  		 * can cleanup properly
>  		 */
>  		newChannel->State = CHANNEL_OPEN_STATE;
> -		cnt = 0;
>  
> -		while (cnt != MAX_MSG_TYPES) {
> +		/* Open IC channels */
> +		for (cnt = 0; cnt < MAX_MSG_TYPES; cnt++) {
>  			if (memcmp(&newChannel->OfferMsg.Offer.InterfaceType,
>  				   &hv_cb_utils[cnt].data,
> -				   sizeof(struct hv_guid)) == 0) {
> +				   sizeof(struct hv_guid)) == 0 &&
> +			    VmbusChannelOpen(newChannel, 2 * PAGE_SIZE,
> +					     2 * PAGE_SIZE, NULL, 0,
> +					     hv_cb_utils[cnt].callback,
> +					     newChannel) == 0) {
> +				hv_cb_utils[cnt].channel = newChannel;
> +				mb();
>  				DPRINT_INFO(VMBUS, "%s",
>  					    hv_cb_utils[cnt].log_msg);
> -
> -				if (VmbusChannelOpen(newChannel, 2 * PAGE_SIZE,
> -						    2 * PAGE_SIZE, NULL, 0,
> -						    hv_cb_utils[cnt].callback,
> -						    newChannel) == 0)
> -					hv_cb_utils[cnt].channel = newChannel;
> +				atomic_inc(&hv_utils_initcnt);
>  			}
> -			cnt++;
>  		}
>  	}
>  	DPRINT_EXIT(VMBUS);
> diff --git a/drivers/staging/hv/hv_utils.c b/drivers/staging/hv/hv_utils.c
> index 8a49aaf..f9826c7 100644
> --- a/drivers/staging/hv/hv_utils.c
> +++ b/drivers/staging/hv/hv_utils.c
> @@ -253,7 +253,11 @@ static void heartbeat_onchannelcallback(void *context)
>  
>  static int __init init_hyperv_utils(void)
>  {
> -	printk(KERN_INFO "Registering HyperV Utility Driver\n");
> +	printk(KERN_INFO "Registering HyperV Utility Driver...\n");
> +
> +	/* Wait until all IC channels are initialized */
> +	while (atomic_read(&hv_utils_initcnt) < MAX_MSG_TYPES)
> +		msleep(100);

No, don't do this here; do something in your hv_vmbus core to handle
registering sub-drivers properly.  Perhaps you need to sleep there
before the initialization can succeed.
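
For the "sleep there" part, something along these lines in the vmbus core
might be enough.  Untested sketch only; vmbus_ic_ready, vmbus_ic_channel_opened()
and vmbus_wait_for_ic_channels() are made-up names, not existing vmbus symbols:

#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/* All names below are invented for illustration only. */
static DECLARE_COMPLETION(vmbus_ic_ready);
static atomic_t vmbus_ic_count = ATOMIC_INIT(0);

/* Called from VmbusChannelProcessOffer() after each IC channel is opened. */
static void vmbus_ic_channel_opened(void)
{
	if (atomic_inc_return(&vmbus_ic_count) == MAX_MSG_TYPES)
		complete_all(&vmbus_ic_ready);
}

/* Called by the vmbus core before it lets a utility sub-driver bind,
 * so the sleeping happens in one place instead of in every driver. */
static int vmbus_wait_for_ic_channels(void)
{
	/* Give up after 5 seconds rather than hanging module load forever. */
	if (!wait_for_completion_timeout(&vmbus_ic_ready,
					 msecs_to_jiffies(5000)))
		return -ETIMEDOUT;
	return 0;
}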

>  
>  	hv_cb_utils[HV_SHUTDOWN_MSG].channel->OnChannelCallback =
>  		&shutdown_onchannelcallback;

The problem is that you just have a bunch of callbacks you are setting
up; it's not a "real" function call.  Please change it over to a
function call, like all other subsystems have.  Then you can handle any
"sleep until we are set up properly" issues in the vmbus code, not in
each and every individual bus driver.
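
Roughly what that interface could look like, as an untested illustration
only -- hv_cb_register() does not exist in the tree, and it reuses the
made-up vmbus_wait_for_ic_channels() helper from the sketch above:

/* hv_cb_register() is invented for this example.  The point is that the
 * sub-driver asks vmbus to hook up the callback, and any waiting for the
 * IC channel to come up lives inside vmbus, not in the sub-driver. */
int hv_cb_register(int msg_type, void (*callback)(void *context))
{
	int ret;

	if (msg_type < 0 || msg_type >= MAX_MSG_TYPES)
		return -EINVAL;

	ret = vmbus_wait_for_ic_channels();	/* sketch from above */
	if (ret)
		return ret;

	hv_cb_utils[msg_type].channel->OnChannelCallback = callback;
	return 0;
}
EXPORT_SYMBOL(hv_cb_register);

/* In init_hyperv_utils(), instead of assigning OnChannelCallback directly: */
ret = hv_cb_register(HV_SHUTDOWN_MSG, &shutdown_onchannelcallback);
if (ret)
	return ret;

That also gives each sub-driver a real error path instead of a polling loop.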

thanks,

greg k-h
