virtualization.lists.linux-foundation.org archive mirror
* [PATCH 0/4] tcm_vhost lock and flush fix
@ 2013-03-12  2:42 Asias He
  2013-03-12  2:42 ` [PATCH 1/4] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Asias He @ 2013-03-12  2:42 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Asias He (4):
  tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  tcm_vhost: Introduce tcm_vhost_check_endpoint()
  tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  tcm_vhost: Flush vhost_work in vhost_scsi_flush()

 drivers/vhost/tcm_vhost.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

-- 
1.8.1.4


* [PATCH 1/4] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  2013-03-12  2:42 [PATCH 0/4] tcm_vhost lock and flush fix Asias He
@ 2013-03-12  2:42 ` Asias He
  2013-03-12 11:13   ` Michael S. Tsirkin
  2013-03-12  2:42 ` [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Asias He @ 2013-03-12  2:42 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

tv_tpg->tv_tpg_vhost_count should be protected by tv_tpg->tv_tpg_mutex.

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 9951297..b3e50d7 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -860,9 +860,11 @@ static int vhost_scsi_clear_endpoint(
 		if (!tv_tpg)
 			continue;
 
+		mutex_lock(&tv_tpg->tv_tpg_mutex);
 		tv_tport = tv_tpg->tport;
 		if (!tv_tport) {
 			ret = -ENODEV;
+			mutex_unlock(&tv_tpg->tv_tpg_mutex);
 			goto err;
 		}
 
@@ -872,11 +874,13 @@ static int vhost_scsi_clear_endpoint(
 				tv_tport->tport_name, tv_tpg->tport_tpgt,
 				t->vhost_wwpn, t->vhost_tpgt);
 			ret = -EINVAL;
+			mutex_unlock(&tv_tpg->tv_tpg_mutex);
 			goto err;
 		}
 		tv_tpg->tv_tpg_vhost_count--;
 		vs->vs_tpg[target] = NULL;
 		vs->vs_endpoint = false;
+		mutex_unlock(&tv_tpg->tv_tpg_mutex);
 	}
 	mutex_unlock(&vs->dev.mutex);
 	return 0;
-- 
1.8.1.4


* [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint()
  2013-03-12  2:42 [PATCH 0/4] tcm_vhost lock and flush fix Asias He
  2013-03-12  2:42 ` [PATCH 1/4] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
@ 2013-03-12  2:42 ` Asias He
  2013-03-12  8:26   ` Paolo Bonzini
  2013-03-12  2:42 ` [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Asias He @ 2013-03-12  2:42 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

This helper is useful to check if vs->vs_endpoint is set up by
vhost_scsi_set_endpoint().

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index b3e50d7..29612bc 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
 	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
 }
 
+static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
+{
+	bool ret = false;
+
+	mutex_lock(&vs->dev.mutex);
+	if (vs->vs_endpoint)
+		ret = true;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
-- 
1.8.1.4


* [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  2013-03-12  2:42 [PATCH 0/4] tcm_vhost lock and flush fix Asias He
  2013-03-12  2:42 ` [PATCH 1/4] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
  2013-03-12  2:42 ` [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
@ 2013-03-12  2:42 ` Asias He
  2013-03-12 11:11   ` Michael S. Tsirkin
  2013-03-12  2:42 ` [PATCH 4/4] tcm_vhost: Flush vhost_work in vhost_scsi_flush() Asias He
  2013-03-12  8:26 ` [PATCH 0/4] tcm_vhost lock and flush fix Paolo Bonzini
  4 siblings, 1 reply; 19+ messages in thread
From: Asias He @ 2013-03-12  2:42 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

vs->vs_endpoint is protected by vs->dev.mutex. Use
tcm_vhost_check_endpoint() to do the check. The helper does the
needed locking for us.

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 29612bc..61093d1 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -594,7 +594,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
 	u8 target;
 
 	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
-	if (unlikely(!vs->vs_endpoint))
+	if (!tcm_vhost_check_endpoint(vs))
 		return;
 
 	mutex_lock(&vq->mutex);
-- 
1.8.1.4


* [PATCH 4/4] tcm_vhost: Flush vhost_work in vhost_scsi_flush()
  2013-03-12  2:42 [PATCH 0/4] tcm_vhost lock and flush fix Asias He
                   ` (2 preceding siblings ...)
  2013-03-12  2:42 ` [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
@ 2013-03-12  2:42 ` Asias He
  2013-03-12  8:26 ` [PATCH 0/4] tcm_vhost lock and flush fix Paolo Bonzini
  4 siblings, 0 replies; 19+ messages in thread
From: Asias He @ 2013-03-12  2:42 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

We also need to flush the vhost_works: one is the completion vhost_work,
the other is the event vhost_work.

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 61093d1..c5157cd 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -953,6 +953,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 
 	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
 		vhost_scsi_flush_vq(vs, i);
+	vhost_work_flush(&vs->dev, &vs->vs_completion_work);
 }
 
 static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
-- 
1.8.1.4


* Re: [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint()
  2013-03-12  2:42 ` [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
@ 2013-03-12  8:26   ` Paolo Bonzini
  2013-03-13  3:02     ` Asias He
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2013-03-12  8:26 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On 12/03/2013 03:42, Asias He wrote:
> This helper is useful to check if vs->vs_endpoint is set up by
> vhost_scsi_set_endpoint().
> 
> Signed-off-by: Asias He <asias@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index b3e50d7..29612bc 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
>  }
>  
> +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
> +{
> +	bool ret = false;
> +
> +	mutex_lock(&vs->dev.mutex);
> +	if (vs->vs_endpoint)
> +		ret = true;
> +	mutex_unlock(&vs->dev.mutex);

The return value is invalid as soon as mutex_unlock is called, i.e.
before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
in the caller while the mutex is taken.

Paolo

> +	return ret;
> +}
> +
>  static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
>  {
>  	return 1;
> 


* Re: [PATCH 0/4] tcm_vhost lock and flush fix
  2013-03-12  2:42 [PATCH 0/4] tcm_vhost lock and flush fix Asias He
                   ` (3 preceding siblings ...)
  2013-03-12  2:42 ` [PATCH 4/4] tcm_vhost: Flush vhost_work in vhost_scsi_flush() Asias He
@ 2013-03-12  8:26 ` Paolo Bonzini
  2013-03-12 11:12   ` Michael S. Tsirkin
  4 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2013-03-12  8:26 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On 12/03/2013 03:42, Asias He wrote:
> Asias He (4):
>   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
>   tcm_vhost: Introduce tcm_vhost_check_endpoint()
>   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
>   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> 
>  drivers/vhost/tcm_vhost.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 

Patch 1, 2, 4:

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  2013-03-12  2:42 ` [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
@ 2013-03-12 11:11   ` Michael S. Tsirkin
  2013-03-13  3:13     ` Asias He
  0 siblings, 1 reply; 19+ messages in thread
From: Michael S. Tsirkin @ 2013-03-12 11:11 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Tue, Mar 12, 2013 at 10:42:50AM +0800, Asias He wrote:
> vs->vs_endpoint is protected by vs->dev.mutex. Use
> tcm_vhost_check_endpoint() to do the check. The helper does the
> needed locking for us.
> 
> Signed-off-by: Asias He <asias@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

This takes the dev mutex on the data path, which will introduce
contention, especially for multiqueue.
How about storing the endpoint as part of vq
private data and protecting with vq mutex?
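
Something along these lines, perhaps (a rough sketch only, not a tested
patch; "endpoint" stands for whatever per-endpoint state ends up being
published via vq->private_data):

	/* SET/CLEAR_ENDPOINT, still under vs->dev.mutex: publish the
	 * endpoint state to each vq under that vq's own mutex. */
	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
		vq = &vs->vqs[i];
		mutex_lock(&vq->mutex);
		vq->private_data = endpoint;	/* NULL on clear */
		mutex_unlock(&vq->mutex);
	}

	/* vhost_scsi_handle_vq(): do the check after taking vq->mutex,
	 * so the data path only ever touches the per-vq lock. */
	mutex_lock(&vq->mutex);
	if (!vq->private_data) {
		mutex_unlock(&vq->mutex);
		return;
	}
	/* ... handle the vq ... */
	mutex_unlock(&vq->mutex);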

> ---
>  drivers/vhost/tcm_vhost.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index 29612bc..61093d1 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -594,7 +594,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
>  	u8 target;
>  
>  	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
> -	if (unlikely(!vs->vs_endpoint))
> +	if (!tcm_vhost_check_endpoint(vs))
>  		return;
>  
>  	mutex_lock(&vq->mutex);
> -- 
> 1.8.1.4


* Re: [PATCH 0/4] tcm_vhost lock and flush fix
  2013-03-12  8:26 ` [PATCH 0/4] tcm_vhost lock and flush fix Paolo Bonzini
@ 2013-03-12 11:12   ` Michael S. Tsirkin
  2013-03-12 11:14     ` Paolo Bonzini
  0 siblings, 1 reply; 19+ messages in thread
From: Michael S. Tsirkin @ 2013-03-12 11:12 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, virtualization, target-devel, Stefan Hajnoczi

On Tue, Mar 12, 2013 at 09:26:42AM +0100, Paolo Bonzini wrote:
> On 12/03/2013 03:42, Asias He wrote:
> > Asias He (4):
> >   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
> >   tcm_vhost: Introduce tcm_vhost_check_endpoint()
> >   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
> >   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> > 
> >  drivers/vhost/tcm_vhost.c | 19 ++++++++++++++++++-
> >  1 file changed, 18 insertions(+), 1 deletion(-)
> > 
> 
> Patch 1, 2, 4:
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

How is 2 useful without 3?


* Re: [PATCH 1/4] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  2013-03-12  2:42 ` [PATCH 1/4] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
@ 2013-03-12 11:13   ` Michael S. Tsirkin
  2013-03-13  2:54     ` Asias He
  0 siblings, 1 reply; 19+ messages in thread
From: Michael S. Tsirkin @ 2013-03-12 11:13 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Tue, Mar 12, 2013 at 10:42:48AM +0800, Asias He wrote:
> tv_tpg->tv_tpg_vhost_count should be protected by tv_tpg->tv_tpg_mutex.
> 
> Signed-off-by: Asias He <asias@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  drivers/vhost/tcm_vhost.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index 9951297..b3e50d7 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -860,9 +860,11 @@ static int vhost_scsi_clear_endpoint(
>  		if (!tv_tpg)
>  			continue;
>  
> +		mutex_lock(&tv_tpg->tv_tpg_mutex);
>  		tv_tport = tv_tpg->tport;
>  		if (!tv_tport) {
>  			ret = -ENODEV;
> +			mutex_unlock(&tv_tpg->tv_tpg_mutex);
>  			goto err;
>  		}
>  
> @@ -872,11 +874,13 @@ static int vhost_scsi_clear_endpoint(
>  				tv_tport->tport_name, tv_tpg->tport_tpgt,
>  				t->vhost_wwpn, t->vhost_tpgt);
>  			ret = -EINVAL;
> +			mutex_unlock(&tv_tpg->tv_tpg_mutex);
>  			goto err;
>  		}
>  		tv_tpg->tv_tpg_vhost_count--;
>  		vs->vs_tpg[target] = NULL;
>  		vs->vs_endpoint = false;
> +		mutex_unlock(&tv_tpg->tv_tpg_mutex);
>  	}
>  	mutex_unlock(&vs->dev.mutex);
>  	return 0;

Does the error handling have to be so messy?
How about another label which does unlock and goto there?
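
I.e., something like this (sketch only, untested; conditions copied
from the quoted hunks, the warning message elided):

		mutex_lock(&tv_tpg->tv_tpg_mutex);
		tv_tport = tv_tpg->tport;
		if (!tv_tport) {
			ret = -ENODEV;
			goto err_unlock;
		}
		if (strcmp(tv_tport->tport_name, t->vhost_wwpn) ||
		    (tv_tpg->tport_tpgt != t->vhost_tpgt)) {
			ret = -EINVAL;
			goto err_unlock;
		}
		tv_tpg->tv_tpg_vhost_count--;
		vs->vs_tpg[target] = NULL;
		vs->vs_endpoint = false;
		mutex_unlock(&tv_tpg->tv_tpg_mutex);
	}
	mutex_unlock(&vs->dev.mutex);
	return 0;

err_unlock:
	mutex_unlock(&tv_tpg->tv_tpg_mutex);
err:
	mutex_unlock(&vs->dev.mutex);
	return ret;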

> -- 
> 1.8.1.4


* Re: [PATCH 0/4] tcm_vhost lock and flush fix
  2013-03-12 11:12   ` Michael S. Tsirkin
@ 2013-03-12 11:14     ` Paolo Bonzini
  2013-03-14  2:17       ` Asias He
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2013-03-12 11:14 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel, Stefan Hajnoczi

On 12/03/2013 12:12, Michael S. Tsirkin wrote:
> On Tue, Mar 12, 2013 at 09:26:42AM +0100, Paolo Bonzini wrote:
>> On 12/03/2013 03:42, Asias He wrote:
>>> Asias He (4):
>>>   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
>>>   tcm_vhost: Introduce tcm_vhost_check_endpoint()
>>>   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
>>>   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
>>>
>>>  drivers/vhost/tcm_vhost.c | 19 ++++++++++++++++++-
>>>  1 file changed, 18 insertions(+), 1 deletion(-)
>>
>> Patch 1, 2, 4:
>>
>> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> 
> How is 2 useful without 3?

I can still say I reviewed it and Asias can include the tag in the next
respin.  But I screwed up, patch 2 is exactly the one that I said makes
little sense and should be folded into patch 3.

Paolo


* Re: [PATCH 1/4] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  2013-03-12 11:13   ` Michael S. Tsirkin
@ 2013-03-13  2:54     ` Asias He
  0 siblings, 0 replies; 19+ messages in thread
From: Asias He @ 2013-03-13  2:54 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Tue, Mar 12, 2013 at 01:13:44PM +0200, Michael S. Tsirkin wrote:
> On Tue, Mar 12, 2013 at 10:42:48AM +0800, Asias He wrote:
> > tv_tpg->tv_tpg_vhost_count should be protected by tv_tpg->tv_tpg_mutex.
> > 
> > Signed-off-by: Asias He <asias@redhat.com>
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  drivers/vhost/tcm_vhost.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > index 9951297..b3e50d7 100644
> > --- a/drivers/vhost/tcm_vhost.c
> > +++ b/drivers/vhost/tcm_vhost.c
> > @@ -860,9 +860,11 @@ static int vhost_scsi_clear_endpoint(
> >  		if (!tv_tpg)
> >  			continue;
> >  
> > +		mutex_lock(&tv_tpg->tv_tpg_mutex);
> >  		tv_tport = tv_tpg->tport;
> >  		if (!tv_tport) {
> >  			ret = -ENODEV;
> > +			mutex_unlock(&tv_tpg->tv_tpg_mutex);
> >  			goto err;
> >  		}
> >  
> > @@ -872,11 +874,13 @@ static int vhost_scsi_clear_endpoint(
> >  				tv_tport->tport_name, tv_tpg->tport_tpgt,
> >  				t->vhost_wwpn, t->vhost_tpgt);
> >  			ret = -EINVAL;
> > +			mutex_unlock(&tv_tpg->tv_tpg_mutex);
> >  			goto err;
> >  		}
> >  		tv_tpg->tv_tpg_vhost_count--;
> >  		vs->vs_tpg[target] = NULL;
> >  		vs->vs_endpoint = false;
> > +		mutex_unlock(&tv_tpg->tv_tpg_mutex);
> >  	}
> >  	mutex_unlock(&vs->dev.mutex);
> >  	return 0;
> 
> Does the error handling have to be so messy?
> How about another label which does unlock and goto there?

Well, I will do it.

> > -- 
> > 1.8.1.4

-- 
Asias


* Re: [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint()
  2013-03-12  8:26   ` Paolo Bonzini
@ 2013-03-13  3:02     ` Asias He
  2013-03-13  8:00       ` Paolo Bonzini
  0 siblings, 1 reply; 19+ messages in thread
From: Asias He @ 2013-03-13  3:02 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On Tue, Mar 12, 2013 at 09:26:18AM +0100, Paolo Bonzini wrote:
> On 12/03/2013 03:42, Asias He wrote:
> > This helper is useful to check if vs->vs_endpoint is set up by
> > vhost_scsi_set_endpoint().
> > 
> > Signed-off-by: Asias He <asias@redhat.com>
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> > 
> > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > index b3e50d7..29612bc 100644
> > --- a/drivers/vhost/tcm_vhost.c
> > +++ b/drivers/vhost/tcm_vhost.c
> > @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
> >  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
> >  }
> >  
> > +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
> > +{
> > +	bool ret = false;
> > +
> > +	mutex_lock(&vs->dev.mutex);
> > +	if (vs->vs_endpoint)
> > +		ret = true;
> > +	mutex_unlock(&vs->dev.mutex);
> 
> The return value is invalid as soon as mutex_unlock is called, i.e.
> before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
> in the caller while the mutex is taken.

Do you mean 1) or 2)?

   1)
   vhost_scsi_handle_vq()
   {
   
      mutex_lock(&vs->dev.mutex);
      check vs->vs_endpoint
      mutex_unlock(&vs->dev.mutex);
   
      handle vq
   }
   
   2)
   vhost_scsi_handle_vq()
   {
   
      lock vs->dev.mutex
      check vs->vs_endpoint
      handle vq
      unlock vs->dev.mutex
   }

1) does not make any difference from the original
one, right?

2) would be too heavy. This might not be a problem in the current
one-thread-per-vhost model. But if we want concurrent multiqueue, this
will kill us.

Anyway, the current one is not good. Need to think.

> Paolo
> 
> > +	return ret;
> > +}
> > +
> >  static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
> >  {
> >  	return 1;
> > 
> 

-- 
Asias


* Re: [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  2013-03-12 11:11   ` Michael S. Tsirkin
@ 2013-03-13  3:13     ` Asias He
  2013-03-13  8:02       ` Paolo Bonzini
  0 siblings, 1 reply; 19+ messages in thread
From: Asias He @ 2013-03-13  3:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Tue, Mar 12, 2013 at 01:11:19PM +0200, Michael S. Tsirkin wrote:
> On Tue, Mar 12, 2013 at 10:42:50AM +0800, Asias He wrote:
> > vs->vs_endpoint is protected by vs->dev.mutex. Use
> > tcm_vhost_check_endpoint() to do the check. The helper does the
> > needed locking for us.
> > 
> > Signed-off-by: Asias He <asias@redhat.com>
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> 
> This takes the dev mutex on the data path, which will introduce
> contention, especially for multiqueue.

Yes, for now it is okay, but for concurrent execution of multiqueue it is
really bad.

By the way, what is the overhead of taking and releasing
vs->dev.mutex even if no one contends for it? Is this overhead ignorable?

> How about storing the endpoint as part of vq
> private data and protecting with vq mutex?

Hmm, this makes sense, let's see how well it works.

> > ---
> >  drivers/vhost/tcm_vhost.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > index 29612bc..61093d1 100644
> > --- a/drivers/vhost/tcm_vhost.c
> > +++ b/drivers/vhost/tcm_vhost.c
> > @@ -594,7 +594,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
> >  	u8 target;
> >  
> >  	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
> > -	if (unlikely(!vs->vs_endpoint))
> > +	if (!tcm_vhost_check_endpoint(vs))
> >  		return;
> >  
> >  	mutex_lock(&vq->mutex);
> > -- 
> > 1.8.1.4

-- 
Asias


* Re: [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint()
  2013-03-13  3:02     ` Asias He
@ 2013-03-13  8:00       ` Paolo Bonzini
  2013-03-14  2:14         ` Asias He
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2013-03-13  8:00 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On 13/03/2013 04:02, Asias He wrote:
> On Tue, Mar 12, 2013 at 09:26:18AM +0100, Paolo Bonzini wrote:
>> On 12/03/2013 03:42, Asias He wrote:
>>> This helper is useful to check if vs->vs_endpoint is set up by
>>> vhost_scsi_set_endpoint().
>>>
>>> Signed-off-by: Asias He <asias@redhat.com>
>>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>>> ---
>>>  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
>>>  1 file changed, 12 insertions(+)
>>>
>>> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
>>> index b3e50d7..29612bc 100644
>>> --- a/drivers/vhost/tcm_vhost.c
>>> +++ b/drivers/vhost/tcm_vhost.c
>>> @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
>>>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
>>>  }
>>>  
>>> +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
>>> +{
>>> +	bool ret = false;
>>> +
>>> +	mutex_lock(&vs->dev.mutex);
>>> +	if (vs->vs_endpoint)
>>> +		ret = true;
>>> +	mutex_unlock(&vs->dev.mutex);
>>
>> The return value is invalid as soon as mutex_unlock is called, i.e.
>> before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
>> in the caller while the mutex is taken.
> 
> Do you mean 1) or 2)?
> 
>    1)
>    vhost_scsi_handle_vq()
>    {
>    
>       mutex_lock(&vs->dev.mutex);
>       check vs->vs_endpoint
>       mutex_unlock(&vs->dev.mutex);
>    
>       handle vq
>    }
>    
>    2)
>    vhost_scsi_handle_vq()
>    {
>    
>       lock vs->dev.mutex
>       check vs->vs_endpoint
>       handle vq
>       unlock vs->dev.mutex
>    }
> 
> 1) does not make any difference from the original
> one, right?

Yes, it's just what you have with tcm_vhost_check_endpoint inlined.

> 2) would be too heavy. This might not be a problem in the current
> one-thread-per-vhost model. But if we want concurrent multiqueue,
> this will kill us.

I mean (2).  You could use an rwlock to enable more concurrency.

Paolo


* Re: [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  2013-03-13  3:13     ` Asias He
@ 2013-03-13  8:02       ` Paolo Bonzini
  2013-03-14  2:12         ` Asias He
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2013-03-13  8:02 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On 13/03/2013 04:13, Asias He wrote:
>> > This takes the dev mutex on the data path, which will introduce
>> > contention, especially for multiqueue.
> Yes, for now it is okay, but for concurrent execution of multiqueue it is
> really bad.
> 
> By the way, what is the overhead of taking and releasing
> vs->dev.mutex even if no one contends for it? Is this overhead ignorable?

There is a possibility of cacheline ping-pong, but apart from that it's
ignorable.

>> > How about storing the endpoint as part of vq
>> > private data and protecting with vq mutex?
> 
> Hmm, this makes sense, let's see how well it works.

Then VHOST_SCSI_SET_ENDPOINT would have to go through all vqs, no?  A
rwlock seems simpler.
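
E.g. (sketch only; since vhost_scsi_handle_vq() can sleep, this would
really have to be a sleeping rw_semaphore rather than a spinning rwlock,
and vs_endpoint_rwsem is a made-up field):

	/* data path: readers on all vqs can run concurrently */
	down_read(&vs->vs_endpoint_rwsem);
	if (!vs->vs_endpoint) {
		up_read(&vs->vs_endpoint_rwsem);
		return;
	}
	/* ... handle the vq; the endpoint cannot be cleared here ... */
	up_read(&vs->vs_endpoint_rwsem);

	/* SET/CLEAR_ENDPOINT: exclusive writer */
	down_write(&vs->vs_endpoint_rwsem);
	vs->vs_endpoint = value;	/* true on set, false on clear */
	up_write(&vs->vs_endpoint_rwsem);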

Paolo


* Re: [PATCH 3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  2013-03-13  8:02       ` Paolo Bonzini
@ 2013-03-14  2:12         ` Asias He
  0 siblings, 0 replies; 19+ messages in thread
From: Asias He @ 2013-03-14  2:12 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On Wed, Mar 13, 2013 at 09:02:38AM +0100, Paolo Bonzini wrote:
> On 13/03/2013 04:13, Asias He wrote:
> >> > This takes the dev mutex on the data path, which will introduce
> >> > contention, especially for multiqueue.
> > Yes, for now it is okay, but for concurrent execution of multiqueue it is
> > really bad.
> > 
> > By the way, what is the overhead of taking and releasing
> > vs->dev.mutex even if no one contends for it? Is this overhead ignorable?
> 
> There is a possibility of cacheline ping-pong, but apart from that it's
> ignorable.

Ah, thanks!

> >> > How about storing the endpoint as part of vq
> >> > private data and protecting with vq mutex?
> > 
> > Hmm, this makes sense, let's see how well it works.
> 
> Then VHOST_SCSI_SET_ENDPOINT would have to go through all vqs, no?  A
> rwlock seems simpler.

The VHOST_SCSI_SET_ENDPOINT operation is not on the data path, so it
is fine to go through all vqs; it's just a loop.

For the rwlock thing, let's discuss it on the other thread.

> Paolo

-- 
Asias


* Re: [PATCH 2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint()
  2013-03-13  8:00       ` Paolo Bonzini
@ 2013-03-14  2:14         ` Asias He
  0 siblings, 0 replies; 19+ messages in thread
From: Asias He @ 2013-03-14  2:14 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On Wed, Mar 13, 2013 at 09:00:43AM +0100, Paolo Bonzini wrote:
> On 13/03/2013 04:02, Asias He wrote:
> > On Tue, Mar 12, 2013 at 09:26:18AM +0100, Paolo Bonzini wrote:
> >> On 12/03/2013 03:42, Asias He wrote:
> >>> This helper is useful to check if vs->vs_endpoint is set up by
> >>> vhost_scsi_set_endpoint().
> >>>
> >>> Signed-off-by: Asias He <asias@redhat.com>
> >>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> >>> ---
> >>>  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
> >>>  1 file changed, 12 insertions(+)
> >>>
> >>> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> >>> index b3e50d7..29612bc 100644
> >>> --- a/drivers/vhost/tcm_vhost.c
> >>> +++ b/drivers/vhost/tcm_vhost.c
> >>> @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
> >>>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
> >>>  }
> >>>  
> >>> +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
> >>> +{
> >>> +	bool ret = false;
> >>> +
> >>> +	mutex_lock(&vs->dev.mutex);
> >>> +	if (vs->vs_endpoint)
> >>> +		ret = true;
> >>> +	mutex_unlock(&vs->dev.mutex);
> >>
> >> The return value is invalid as soon as mutex_unlock is called, i.e.
> >> before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
> >> in the caller while the mutex is taken.
> > 
> > Do you mean 1) or 2)?
> > 
> >    1)
> >    vhost_scsi_handle_vq()
> >    {
> >    
> >       mutex_lock(&vs->dev.mutex);
> >       check vs->vs_endpoint
> >       mutex_unlock(&vs->dev.mutex);
> >    
> >       handle vq
> >    }
> >    
> >    2)
> >    vhost_scsi_handle_vq()
> >    {
> >    
> >       lock vs->dev.mutex
> >       check vs->vs_endpoint
> >       handle vq
> >       unlock vs->dev.mutex
> >    }
> > 
> > 1) does not make any difference from the original
> > one, right?
> 
> Yes, it's just what you have with tcm_vhost_check_endpoint inlined.

okay.

> > 2) would be too heavy. This might not be a problem in the current
> > one-thread-per-vhost model. But if we want concurrent multiqueue,
> > this will kill us.
> 
> I mean (2).  You could use an rwlock to enable more concurrency.

-- 
Asias


* Re: [PATCH 0/4] tcm_vhost lock and flush fix
  2013-03-12 11:14     ` Paolo Bonzini
@ 2013-03-14  2:17       ` Asias He
  0 siblings, 0 replies; 19+ messages in thread
From: Asias He @ 2013-03-14  2:17 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi

On Tue, Mar 12, 2013 at 12:14:56PM +0100, Paolo Bonzini wrote:
On 12/03/2013 12:12, Michael S. Tsirkin wrote:
> > On Tue, Mar 12, 2013 at 09:26:42AM +0100, Paolo Bonzini wrote:
> >> On 12/03/2013 03:42, Asias He wrote:
> >>> Asias He (4):
> >>>   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
> >>>   tcm_vhost: Introduce tcm_vhost_check_endpoint()
> >>>   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
> >>>   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> >>>
> >>>  drivers/vhost/tcm_vhost.c | 19 ++++++++++++++++++-
> >>>  1 file changed, 18 insertions(+), 1 deletion(-)
> >>
> >> Patch 1, 2, 4:
> >>
> >> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > 
> > How is 2 useful without 3?
> 
> I can still say I reviewed it and Asias can include the tag in the next
> respin.  But I screwed up, patch 2 is exactly the one that I said makes
> little sense and should be folded into patch 3.

Thanks, I forgot to include your Reviewed-by tag in v2. Including it
now. And patches 2 and 3 are folded.

-- 
Asias

