* [PATCH 0/5] tcm_vhost hotplug/hotunplug support and locking fix
@ 2013-03-06  6:16 Asias He
  2013-03-06  6:16 ` [PATCH 1/5] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Asias He @ 2013-03-06  6:16 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Asias He (5):
  tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  tcm_vhost: Introduce tcm_vhost_check_feature()
  tcm_vhost: Introduce tcm_vhost_check_endpoint()
  tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  tcm_vhost: Add hotplug/hotunplug support

 drivers/vhost/tcm_vhost.c | 238 ++++++++++++++++++++++++++++++++++++++++++++--
 drivers/vhost/tcm_vhost.h |  10 ++
 2 files changed, 242 insertions(+), 6 deletions(-)

-- 
1.8.1.4

* [PATCH 1/5] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  2013-03-06  6:16 [PATCH 0/5] tcm_vhost hotplug/hotunplug support and locking fix Asias He
@ 2013-03-06  6:16 ` Asias He
  2013-03-06  6:16 ` [PATCH 2/5] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Asias He @ 2013-03-06  6:16 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

tv_tpg->tv_tpg_vhost_count should be protected by tv_tpg->tv_tpg_mutex.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 9951297..b3e50d7 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -860,9 +860,11 @@ static int vhost_scsi_clear_endpoint(
 		if (!tv_tpg)
 			continue;
 
+		mutex_lock(&tv_tpg->tv_tpg_mutex);
 		tv_tport = tv_tpg->tport;
 		if (!tv_tport) {
 			ret = -ENODEV;
+			mutex_unlock(&tv_tpg->tv_tpg_mutex);
 			goto err;
 		}
 
@@ -872,11 +874,13 @@ static int vhost_scsi_clear_endpoint(
 				tv_tport->tport_name, tv_tpg->tport_tpgt,
 				t->vhost_wwpn, t->vhost_tpgt);
 			ret = -EINVAL;
+			mutex_unlock(&tv_tpg->tv_tpg_mutex);
 			goto err;
 		}
 		tv_tpg->tv_tpg_vhost_count--;
 		vs->vs_tpg[target] = NULL;
 		vs->vs_endpoint = false;
+		mutex_unlock(&tv_tpg->tv_tpg_mutex);
 	}
 	mutex_unlock(&vs->dev.mutex);
 	return 0;
-- 
1.8.1.4

* [PATCH 2/5] tcm_vhost: Introduce tcm_vhost_check_feature()
  2013-03-06  6:16 [PATCH 0/5] tcm_vhost hotplug/hotunplug support and locking fix Asias He
  2013-03-06  6:16 ` [PATCH 1/5] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
@ 2013-03-06  6:16 ` Asias He
  2013-03-06  9:06   ` Stefan Hajnoczi
  2013-03-06  6:16 ` [PATCH 3/5] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Asias He @ 2013-03-06  6:16 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

This helper is useful to check if a feature is supported.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index b3e50d7..fdbf986 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -91,6 +91,20 @@ static int iov_num_pages(struct iovec *iov)
 	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
 }
 
+static bool tcm_vhost_check_feature(struct vhost_scsi *vs, u64 feature)
+{
+	u64 acked_features;
+	bool ret = false;
+
+	mutex_lock(&vs->dev.mutex);
+	acked_features = vs->dev.acked_features;
+	if (acked_features & 1ULL << feature)
+		ret = true;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
-- 
1.8.1.4

* [PATCH 3/5] tcm_vhost: Introduce tcm_vhost_check_endpoint()
  2013-03-06  6:16 [PATCH 0/5] tcm_vhost hotplug/hotunplug support and locking fix Asias He
  2013-03-06  6:16 ` [PATCH 1/5] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
  2013-03-06  6:16 ` [PATCH 2/5] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
@ 2013-03-06  6:16 ` Asias He
  2013-03-06  6:16 ` [PATCH 4/5] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Asias He @ 2013-03-06  6:16 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

This helper is useful to check if vs->vs_endpoint has been set up by
vhost_scsi_set_endpoint().

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index fdbf986..ff56b1d 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -105,6 +105,18 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, u64 feature)
 	return ret;
 }
 
+static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
+{
+	bool ret = false;
+
+	mutex_lock(&vs->dev.mutex);
+	if (vs->vs_endpoint)
+		ret = true;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
-- 
1.8.1.4

* [PATCH 4/5] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  2013-03-06  6:16 [PATCH 0/5] tcm_vhost hotplug/hotunplug support and locking fix Asias He
                   ` (2 preceding siblings ...)
  2013-03-06  6:16 ` [PATCH 3/5] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
@ 2013-03-06  6:16 ` Asias He
  2013-03-06  6:16 ` [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support Asias He
       [not found] ` <1362550590-3534-6-git-send-email-asias@redhat.com>
  5 siblings, 0 replies; 14+ messages in thread
From: Asias He @ 2013-03-06  6:16 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

vs->vs_endpoint is protected by vs->dev.mutex. Use
tcm_vhost_check_endpoint() to do the check; the helper does the needed
locking for us.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index ff56b1d..f9daf39 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -608,7 +608,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
 	u8 target;
 
 	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
-	if (unlikely(!vs->vs_endpoint))
+	if (!tcm_vhost_check_endpoint(vs))
 		return;
 
 	mutex_lock(&vq->mutex);
-- 
1.8.1.4

* [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support
  2013-03-06  6:16 [PATCH 0/5] tcm_vhost hotplug/hotunplug support and locking fix Asias He
                   ` (3 preceding siblings ...)
  2013-03-06  6:16 ` [PATCH 4/5] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
@ 2013-03-06  6:16 ` Asias He
       [not found] ` <1362550590-3534-6-git-send-email-asias@redhat.com>
  5 siblings, 0 replies; 14+ messages in thread
From: Asias He @ 2013-03-06  6:16 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Commit 365a7150094 ("[SCSI] virtio-scsi: hotplug support for
virtio-scsi") added hotplug support to virtio-scsi.

This patch adds hotplug and hotunplug support to tcm_vhost.

Creating or deleting a LUN in targetcli hotplugs or hotunplugs the
corresponding LUN in the guest.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 206 ++++++++++++++++++++++++++++++++++++++++++++--
 drivers/vhost/tcm_vhost.h |  10 +++
 2 files changed, 211 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index f9daf39..9be8879 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -62,11 +62,16 @@ enum {
 
 #define VHOST_SCSI_MAX_TARGET	256
 #define VHOST_SCSI_MAX_VQ	128
+#define VHOST_SCSI_MAX_EVENT	128
+
+#define VHOST_SCSI_FEATURES (VHOST_FEATURES | (1ULL << VIRTIO_SCSI_F_HOTPLUG))
 
 struct vhost_scsi {
 	/* Protected by vhost_scsi->dev.mutex */
 	struct tcm_vhost_tpg *vs_tpg[VHOST_SCSI_MAX_TARGET];
 	char vs_vhost_wwpn[TRANSPORT_IQN_LEN];
+	bool vs_events_dropped; /* any missed events */
+	atomic_t vs_events_nr; /* number of pending events */
 	bool vs_endpoint;
 
 	struct vhost_dev dev;
@@ -74,6 +79,10 @@ struct vhost_scsi {
 
 	struct vhost_work vs_completion_work; /* cmd completion work item */
 	struct llist_head vs_completion_list; /* cmd completion queue */
+
+	struct vhost_work vs_event_work; /* evt injection work item */
+	struct llist_head vs_event_list; /* evt injection queue */
+
 };
 
 /* Local pointer to allocated TCM configfs fabric module */
@@ -117,6 +126,16 @@ static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
 	return ret;
 }
 
+static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs)
+{
+	bool ret;
+
+	mutex_lock(&vs->dev.mutex);
+	ret = vs->vs_events_dropped;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
@@ -367,6 +386,31 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd)
 	return 0;
 }
 
+static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt)
+{
+	atomic_dec(&vs->vs_events_nr);
+	kfree(evt);
+}
+
+static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
+	u32 event, u32 reason)
+{
+	struct tcm_vhost_evt *evt;
+
+	if (atomic_read(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT)
+		return NULL;
+
+	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
+
+	if (evt) {
+		atomic_inc(&vs->vs_events_nr);
+		evt->event.event = event;
+		evt->event.reason = reason;
+	}
+
+	return evt;
+}
+
 static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
 {
 	struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd;
@@ -385,6 +429,77 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
 	kfree(tv_cmd);
 }
 
+static void tcm_vhost_do_evt_work(struct vhost_scsi *vs,
+	struct virtio_scsi_event *event)
+{
+	struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT];
+	struct virtio_scsi_event __user *eventp;
+	unsigned out, in;
+	int head, ret;
+
+	if (!tcm_vhost_check_endpoint(vs))
+		return;
+
+	mutex_lock(&vq->mutex);
+again:
+	vhost_disable_notify(&vs->dev, vq);
+	head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
+			ARRAY_SIZE(vq->iov), &out, &in,
+			NULL, NULL);
+	if (head < 0) {
+		mutex_lock(&vs->dev.mutex);
+		vs->vs_events_dropped = true;
+		mutex_unlock(&vs->dev.mutex);
+		goto out;
+	}
+	if (head == vq->num) {
+		if (vhost_enable_notify(&vs->dev, vq))
+			goto again;
+		mutex_lock(&vs->dev.mutex);
+		vs->vs_events_dropped = true;
+		mutex_unlock(&vs->dev.mutex);
+		goto out;
+	}
+
+	if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) {
+		vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n",
+				vq->iov[out].iov_len);
+		goto out;
+	}
+
+	mutex_lock(&vs->dev.mutex);
+	if (vs->vs_events_dropped) {
+		event->event |= VIRTIO_SCSI_T_EVENTS_MISSED;
+		vs->vs_events_dropped = false;
+	}
+	mutex_unlock(&vs->dev.mutex);
+
+	eventp = vq->iov[out].iov_base;
+	ret = __copy_to_user(eventp, event, sizeof(*event));
+	if (!ret)
+		vhost_add_used_and_signal(&vs->dev, vq, head, 0);
+	else
+		vq_err(vq, "Faulted on tcm_vhost_send_event\n");
+out:
+	mutex_unlock(&vq->mutex);
+}
+
+static void tcm_vhost_evt_work(struct vhost_work *work)
+{
+	struct vhost_scsi *vs = container_of(work, struct vhost_scsi,
+					vs_event_work);
+	struct tcm_vhost_evt *evt;
+	struct llist_node *llnode;
+
+	llnode = llist_del_all(&vs->vs_event_list);
+	while (llnode) {
+		evt = llist_entry(llnode, struct tcm_vhost_evt, list);
+		llnode = llist_next(llnode);
+		tcm_vhost_do_evt_work(vs, &evt->event);
+		tcm_vhost_free_evt(vs, evt);
+	}
+}
+
 /* Fill in status and signal that we are done processing this command
  *
  * This is scheduled in the vhost work queue so we are called with the owner
@@ -783,9 +898,46 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work)
 	pr_debug("%s: The handling func for control queue.\n", __func__);
 }
 
+static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg,
+	struct se_lun *lun, u32 event, u32 reason)
+{
+	struct tcm_vhost_evt *evt;
+
+	if (!tcm_vhost_check_endpoint(vs))
+		return -EOPNOTSUPP;
+
+	evt = tcm_vhost_allocate_evt(vs, event, reason);
+	if (!evt) {
+		mutex_lock(&vs->dev.mutex);
+		vs->vs_events_dropped = true;
+		mutex_unlock(&vs->dev.mutex);
+		return -ENOMEM;
+	}
+
+	if (tpg && lun) {
+		/* TODO: share lun setup code with virtio-scsi.ko */
+		evt->event.lun[0] = 0x01;
+		evt->event.lun[1] = tpg->tport_tpgt & 0xFF;
+		if (lun->unpacked_lun >= 256)
+			evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ;
+		evt->event.lun[3] = lun->unpacked_lun & 0xFF;
+	}
+
+	llist_add(&evt->list, &vs->vs_event_list);
+	vhost_work_queue(&vs->dev, &vs->vs_event_work);
+
+	return 0;
+}
+
 static void vhost_scsi_evt_handle_kick(struct vhost_work *work)
 {
-	pr_debug("%s: The handling func for event queue.\n", __func__);
+	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
+						poll.work);
+	struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev);
+
+	if (tcm_vhost_check_events_dropped(vs))
+		tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+
 }
 
 static void vhost_scsi_handle_kick(struct vhost_work *work)
@@ -841,6 +993,7 @@ static int vhost_scsi_set_endpoint(
 				return -EEXIST;
 			}
 			tv_tpg->tv_tpg_vhost_count++;
+			tv_tpg->vhost_scsi = vs;
 			vs->vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
 			smp_mb__after_atomic_inc();
 			match = true;
@@ -904,6 +1057,7 @@ static int vhost_scsi_clear_endpoint(
 			goto err;
 		}
 		tv_tpg->tv_tpg_vhost_count--;
+		tv_tpg->vhost_scsi = NULL;
 		vs->vs_tpg[target] = NULL;
 		vs->vs_endpoint = false;
 		mutex_unlock(&tv_tpg->tv_tpg_mutex);
@@ -926,6 +1080,9 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
 		return -ENOMEM;
 
 	vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work);
+	vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work);
+
+	atomic_set(&s->vs_events_nr, 0);
 
 	s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick;
 	s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick;
@@ -971,7 +1128,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 
 static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
 {
-	if (features & ~VHOST_FEATURES)
+	if (features & ~VHOST_SCSI_FEATURES)
 		return -EOPNOTSUPP;
 
 	mutex_lock(&vs->dev.mutex);
@@ -1017,7 +1174,7 @@ static long vhost_scsi_ioctl(struct file *f, unsigned int ioctl,
 			return -EFAULT;
 		return 0;
 	case VHOST_GET_FEATURES:
-		features = VHOST_FEATURES;
+		features = VHOST_SCSI_FEATURES;
 		if (copy_to_user(featurep, &features, sizeof features))
 			return -EFAULT;
 		return 0;
@@ -1087,6 +1244,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport)
 	return "Unknown";
 }
 
+static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
+{
+	struct vhost_scsi *vs = tpg->vhost_scsi;
+
+	mutex_lock(&tpg->tv_tpg_mutex);
+	vs = tpg->vhost_scsi;
+	mutex_unlock(&tpg->tv_tpg_mutex);
+	if (!vs)
+		return -EOPNOTSUPP;
+
+	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
+		return -EOPNOTSUPP;
+
+	return tcm_vhost_send_evt(vs, tpg, lun,
+			VIRTIO_SCSI_T_TRANSPORT_RESET,
+			VIRTIO_SCSI_EVT_RESET_RESCAN);
+}
+
+static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
+{
+	struct vhost_scsi *vs = tpg->vhost_scsi;
+
+	mutex_lock(&tpg->tv_tpg_mutex);
+	vs = tpg->vhost_scsi;
+	mutex_unlock(&tpg->tv_tpg_mutex);
+	if (!vs)
+		return -EOPNOTSUPP;
+
+	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
+		return -EOPNOTSUPP;
+
+	return tcm_vhost_send_evt(vs, tpg, lun,
+			VIRTIO_SCSI_T_TRANSPORT_RESET,
+			VIRTIO_SCSI_EVT_RESET_REMOVED);
+}
+
 static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
 	struct se_lun *lun)
 {
@@ -1097,18 +1290,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
 	tv_tpg->tv_tpg_port_count++;
 	mutex_unlock(&tv_tpg->tv_tpg_mutex);
 
+	tcm_vhost_hotplug(tv_tpg, lun);
+
 	return 0;
 }
 
 static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg,
-	struct se_lun *se_lun)
+	struct se_lun *lun)
 {
 	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
-
 	mutex_lock(&tv_tpg->tv_tpg_mutex);
 	tv_tpg->tv_tpg_port_count--;
 	mutex_unlock(&tv_tpg->tv_tpg_mutex);
+
+	tcm_vhost_hotunplug(tv_tpg, lun);
 }
 
 static struct se_node_acl *tcm_vhost_make_nodeacl(
diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h
index 1d2ae7a..94e9ee53 100644
--- a/drivers/vhost/tcm_vhost.h
+++ b/drivers/vhost/tcm_vhost.h
@@ -53,6 +53,7 @@ struct tcm_vhost_nacl {
 	struct se_node_acl se_node_acl;
 };
 
+struct vhost_scsi;
 struct tcm_vhost_tpg {
 	/* Vhost port target portal group tag for TCM */
 	u16 tport_tpgt;
@@ -70,6 +71,8 @@ struct tcm_vhost_tpg {
 	struct tcm_vhost_tport *tport;
 	/* Returned by tcm_vhost_make_tpg() */
 	struct se_portal_group se_tpg;
+	/* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */
+	struct vhost_scsi *vhost_scsi;
 };
 
 struct tcm_vhost_tport {
@@ -83,6 +86,13 @@ struct tcm_vhost_tport {
 	struct se_wwn tport_wwn;
 };
 
+struct tcm_vhost_evt {
+	/* virtio_scsi event */
+	struct virtio_scsi_event event;
+	/* virtio_scsi event list, serviced from vhost worker thread */
+	struct llist_node list;
+};
+
 /*
  * As per request from MST, keep TCM_VHOST related ioctl defines out of
  * linux/vhost.h (user-space) for now..
-- 
1.8.1.4

* Re: [PATCH 2/5] tcm_vhost: Introduce tcm_vhost_check_feature()
  2013-03-06  6:16 ` [PATCH 2/5] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
@ 2013-03-06  9:06   ` Stefan Hajnoczi
  2013-03-07  0:35     ` Asias He
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-03-06  9:06 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Paolo Bonzini

On Wed, Mar 06, 2013 at 02:16:27PM +0800, Asias He wrote:
> This helper is useful to check if a feature is supported.
> 
> Signed-off-by: Asias He <asias@redhat.com>
> ---
>  drivers/vhost/tcm_vhost.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index b3e50d7..fdbf986 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -91,6 +91,20 @@ static int iov_num_pages(struct iovec *iov)
>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
>  }
>  
> +static bool tcm_vhost_check_feature(struct vhost_scsi *vs, u64 feature)
> +{
> +	u64 acked_features;
> +	bool ret = false;
> +
> +	mutex_lock(&vs->dev.mutex);
> +	acked_features = vs->dev.acked_features;
> +	if (acked_features & 1ULL << feature)
> +		ret = true;
> +	mutex_unlock(&vs->dev.mutex);
> +
> +	return ret;
> +}

This is like vhost_has_feature() except it acquires dev.mutex?

In any case it isn't tcm_vhost-specific and could be in vhost.c.

Stefan

* Re: [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support
       [not found] ` <1362550590-3534-6-git-send-email-asias@redhat.com>
@ 2013-03-06  9:21   ` Stefan Hajnoczi
  2013-03-07  0:26     ` Asias He
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-03-06  9:21 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Paolo Bonzini

On Wed, Mar 06, 2013 at 02:16:30PM +0800, Asias He wrote:
> +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
> +	u32 event, u32 reason)
> +{
> +	struct tcm_vhost_evt *evt;
> +
> +	if (atomic_read(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT)
> +		return NULL;
> +
> +	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
> +
> +	if (evt) {
> +		atomic_inc(&vs->vs_events_nr);

This looks suspicious: checking vs_events_nr > VHOST_SCSI_MAX_EVENT
first and then incrementing later isn't atomic!

> +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> +{
> +	struct vhost_scsi *vs = tpg->vhost_scsi;
> +
> +	mutex_lock(&tpg->tv_tpg_mutex);
> +	vs = tpg->vhost_scsi;
> +	mutex_unlock(&tpg->tv_tpg_mutex);
> +	if (!vs)
> +		return -EOPNOTSUPP;
> +
> +	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
> +		return -EOPNOTSUPP;
> +
> +	return tcm_vhost_send_evt(vs, tpg, lun,
> +			VIRTIO_SCSI_T_TRANSPORT_RESET,
> +			VIRTIO_SCSI_EVT_RESET_REMOVED);
> +}

tcm_vhost_hotplug() and tcm_vhost_hotunplug() are the same function
except for VIRTIO_SCSI_EVT_RESET_RESCAN vs
VIRTIO_SCSI_EVT_RESET_REMOVED.  That can be passed in as an argument and
the code duplication can be eliminated.

* Re: [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support
  2013-03-06  9:21   ` Stefan Hajnoczi
@ 2013-03-07  0:26     ` Asias He
  2013-03-07  8:58       ` Stefan Hajnoczi
  0 siblings, 1 reply; 14+ messages in thread
From: Asias He @ 2013-03-07  0:26 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Paolo Bonzini

On Wed, Mar 06, 2013 at 10:21:09AM +0100, Stefan Hajnoczi wrote:
> On Wed, Mar 06, 2013 at 02:16:30PM +0800, Asias He wrote:
> > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
> > +	u32 event, u32 reason)
> > +{
> > +	struct tcm_vhost_evt *evt;
> > +
> > +	if (atomic_read(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT)
> > +		return NULL;
> > +
> > +	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
> > +
> > +	if (evt) {
> > +		atomic_inc(&vs->vs_events_nr);
> 
> This looks suspicious: checking vs_events_nr > VHOST_SCSI_MAX_EVENT
> first and then incrementing later isn't atomic!

This does not matter. (1) and (2) are okay. In case (3), the other side
can only decrease the number of events, so the limit will not be exceeded.

(1) 
   		   atomic_dec()
   atomic_read() 
   atomic_inc()
(2)
   atomic_read() 
   atomic_inc()
   		   atomic_dec()

(3)
   atomic_read() 
   		   atomic_dec()
   atomic_inc()

          
> > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> > +{
> > +	struct vhost_scsi *vs = tpg->vhost_scsi;
> > +
> > +	mutex_lock(&tpg->tv_tpg_mutex);
> > +	vs = tpg->vhost_scsi;
> > +	mutex_unlock(&tpg->tv_tpg_mutex);
> > +	if (!vs)
> > +		return -EOPNOTSUPP;
> > +
> > +	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
> > +		return -EOPNOTSUPP;
> > +
> > +	return tcm_vhost_send_evt(vs, tpg, lun,
> > +			VIRTIO_SCSI_T_TRANSPORT_RESET,
> > +			VIRTIO_SCSI_EVT_RESET_REMOVED);
> > +}
> 
> tcm_vhost_hotplug() and tcm_vhost_hotunplug() are the same function
> except for VIRTIO_SCSI_EVT_RESET_RESCAN vs
> VIRTIO_SCSI_EVT_RESET_REMOVED.  That can be passed in as an argument and
> the code duplication can be eliminated.

I thought about this also. We can have a tcm_vhost_do_hotplug() helper.

   tcm_vhost_do_hotplug(tpg, lun, plug)
   
   tcm_vhost_hotplug() {
   	tcm_vhost_do_hotplug(tpg, lun, true)
   }
   
   tcm_vhost_hotunplug() {
   	tcm_vhost_do_hotplug(tpg, lun, false)
   }

The reason I did not do that is that I do not like the true/false
argument, but it does remove the duplication. I will do it.
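
Something like this, roughly (a sketch only, reusing the helpers and
constants from this patch; untested):

static int tcm_vhost_do_hotplug(struct tcm_vhost_tpg *tpg,
	struct se_lun *lun, bool plug)
{
	struct vhost_scsi *vs;
	u32 reason;

	mutex_lock(&tpg->tv_tpg_mutex);
	vs = tpg->vhost_scsi;
	mutex_unlock(&tpg->tv_tpg_mutex);
	if (!vs)
		return -EOPNOTSUPP;

	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
		return -EOPNOTSUPP;

	/* RESCAN tells the guest to probe the LUN, REMOVED to drop it */
	reason = plug ? VIRTIO_SCSI_EVT_RESET_RESCAN :
			VIRTIO_SCSI_EVT_RESET_REMOVED;

	return tcm_vhost_send_evt(vs, tpg, lun,
			VIRTIO_SCSI_T_TRANSPORT_RESET, reason);
}

static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
{
	return tcm_vhost_do_hotplug(tpg, lun, true);
}

static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
{
	return tcm_vhost_do_hotplug(tpg, lun, false);
}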

-- 
Asias

* Re: [PATCH 2/5] tcm_vhost: Introduce tcm_vhost_check_feature()
  2013-03-06  9:06   ` Stefan Hajnoczi
@ 2013-03-07  0:35     ` Asias He
  0 siblings, 0 replies; 14+ messages in thread
From: Asias He @ 2013-03-07  0:35 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Paolo Bonzini

On Wed, Mar 06, 2013 at 10:06:20AM +0100, Stefan Hajnoczi wrote:
> On Wed, Mar 06, 2013 at 02:16:27PM +0800, Asias He wrote:
> > This helper is useful to check if a feature is supported.
> > 
> > Signed-off-by: Asias He <asias@redhat.com>
> > ---
> >  drivers/vhost/tcm_vhost.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> > 
> > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > index b3e50d7..fdbf986 100644
> > --- a/drivers/vhost/tcm_vhost.c
> > +++ b/drivers/vhost/tcm_vhost.c
> > @@ -91,6 +91,20 @@ static int iov_num_pages(struct iovec *iov)
> >  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
> >  }
> >  
> > +static bool tcm_vhost_check_feature(struct vhost_scsi *vs, u64 feature)
> > +{
> > +	u64 acked_features;
> > +	bool ret = false;
> > +
> > +	mutex_lock(&vs->dev.mutex);
> > +	acked_features = vs->dev.acked_features;
> > +	if (acked_features & 1ULL << feature)
> > +		ret = true;
> > +	mutex_unlock(&vs->dev.mutex);
> > +
> > +	return ret;
> > +}
> 
> This is like vhost_has_feature() except it acquires dev.mutex?
> 
> In any case it isn't tcm_vhost-specific and could be in vhost.c.

tcm_vhost_check_feature() is called outside the vhost worker thread. So
the dev.mutex is needed. 

MST, what's your preference here? Add a vhost_has_feature_locked() to
vhost.c, or should I use vhost_has_feature() in
tcm_vhost_check_feature()?
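
The former would look something like this (just a sketch of that
option; there is no vhost_has_feature_locked() today, and the body
simply mirrors what tcm_vhost_check_feature() does):

/* like vhost_has_feature(), but takes dev->mutex; could live next to it */
static inline bool vhost_has_feature_locked(struct vhost_dev *dev, int bit)
{
	bool ret;

	mutex_lock(&dev->mutex);
	ret = dev->acked_features & (1ULL << bit);
	mutex_unlock(&dev->mutex);

	return ret;
}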

Thanks.
-- 
Asias

* Re: [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support
  2013-03-07  0:26     ` Asias He
@ 2013-03-07  8:58       ` Stefan Hajnoczi
  2013-03-07  9:47         ` Asias He
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-03-07  8:58 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

On Thu, Mar 07, 2013 at 08:26:20AM +0800, Asias He wrote:
> On Wed, Mar 06, 2013 at 10:21:09AM +0100, Stefan Hajnoczi wrote:
> > On Wed, Mar 06, 2013 at 02:16:30PM +0800, Asias He wrote:
> > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
> > > +	u32 event, u32 reason)
> > > +{
> > > +	struct tcm_vhost_evt *evt;
> > > +
> > > +	if (atomic_read(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT)
> > > +		return NULL;
> > > +
> > > +	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
> > > +
> > > +	if (evt) {
> > > +		atomic_inc(&vs->vs_events_nr);
> > 
> > This looks suspicious: checking vs_events_nr > VHOST_SCSI_MAX_EVENT
> > first and then incrementing later isn't atomic!
> 
> This does not matter. (1) and (2) are okay. In case (3), the other side
> can only decrease the number of event, the limit will not be exceeded.
> 
> (1) 
>    		   atomic_dec()
>    atomic_read() 
>    atomic_inc()
> (2)
>    atomic_read() 
>    atomic_inc()
>    		   atomic_dec()
> 
> (3)
>    atomic_read() 
>    		   atomic_dec()
>    atomic_inc()

The cases you listed are fine but I'm actually concerned about
tcm_vhost_allocate_evt() racing with itself.  There are 3 callers and
I'm not sure which lock prevents them from executing at the same time.

> > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> > > +{
> > > +	struct vhost_scsi *vs = tpg->vhost_scsi;
> > > +
> > > +	mutex_lock(&tpg->tv_tpg_mutex);
> > > +	vs = tpg->vhost_scsi;
> > > +	mutex_unlock(&tpg->tv_tpg_mutex);
> > > +	if (!vs)
> > > +		return -EOPNOTSUPP;
> > > +
> > > +	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
> > > +		return -EOPNOTSUPP;
> > > +
> > > +	return tcm_vhost_send_evt(vs, tpg, lun,
> > > +			VIRTIO_SCSI_T_TRANSPORT_RESET,
> > > +			VIRTIO_SCSI_EVT_RESET_REMOVED);
> > > +}
> > 
> > tcm_vhost_hotplug() and tcm_vhost_hotunplug() are the same function
> > except for VIRTIO_SCSI_EVT_RESET_RESCAN vs
> > VIRTIO_SCSI_EVT_RESET_REMOVED.  That can be passed in as an argument and
> > the code duplication can be eliminated.
> 
> I thought about this also. We can have a tcm_vhost_do_hotplug() helper.
> 
>    tcm_vhost_do_hotplug(tpg, lun, plug)
>    
>    tcm_vhost_hotplug() {
>    	tcm_vhost_do_hotplug(tpg, lun, true)
>    }
>    
>    tcm_vhost_hotunplug() {
>    	tcm_vhost_do_hotplug(tpg, lun, false)
>    }
> 
> The reason I did not do that is I do not like the true/false argument
> but anyway this could remove duplication. I will do it.

true/false makes the calling code hard to read, I suggest passing in
VIRTIO_SCSI_EVT_RESET_RESCAN or VIRTIO_SCSI_EVT_RESET_REMOVED as the
argument.

Stefan

* Re: [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support
  2013-03-07  8:58       ` Stefan Hajnoczi
@ 2013-03-07  9:47         ` Asias He
  2013-03-07 12:53           ` Stefan Hajnoczi
  0 siblings, 1 reply; 14+ messages in thread
From: Asias He @ 2013-03-07  9:47 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

On Thu, Mar 07, 2013 at 09:58:04AM +0100, Stefan Hajnoczi wrote:
> On Thu, Mar 07, 2013 at 08:26:20AM +0800, Asias He wrote:
> > On Wed, Mar 06, 2013 at 10:21:09AM +0100, Stefan Hajnoczi wrote:
> > > On Wed, Mar 06, 2013 at 02:16:30PM +0800, Asias He wrote:
> > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
> > > > +	u32 event, u32 reason)
> > > > +{
> > > > +	struct tcm_vhost_evt *evt;
> > > > +
> > > > +	if (atomic_read(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT)
> > > > +		return NULL;
> > > > +
> > > > +	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
> > > > +
> > > > +	if (evt) {
> > > > +		atomic_inc(&vs->vs_events_nr);
> > > 
> > > This looks suspicious: checking vs_events_nr > VHOST_SCSI_MAX_EVENT
> > > first and then incrementing later isn't atomic!
> > 
> > This does not matter. (1) and (2) are okay. In case (3), the other side
> > can only decrease the number of event, the limit will not be exceeded.
> > 
> > (1) 
> >    		   atomic_dec()
> >    atomic_read() 
> >    atomic_inc()
> > (2)
> >    atomic_read() 
> >    atomic_inc()
> >    		   atomic_dec()
> > 
> > (3)
> >    atomic_read() 
> >    		   atomic_dec()
> >    atomic_inc()
> 
> The cases you listed are fine but I'm actually concerned about
> tcm_vhost_allocate_evt() racing with itself.  There are 3 callers and
> I'm not sure which lock prevents them from executing at the same time.

There is no lock to prevent it. But what race arises from executing
tcm_vhost_allocate_evt() at the same time?

> > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> > > > +{
> > > > +	struct vhost_scsi *vs = tpg->vhost_scsi;
> > > > +
> > > > +	mutex_lock(&tpg->tv_tpg_mutex);
> > > > +	vs = tpg->vhost_scsi;
> > > > +	mutex_unlock(&tpg->tv_tpg_mutex);
> > > > +	if (!vs)
> > > > +		return -EOPNOTSUPP;
> > > > +
> > > > +	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
> > > > +		return -EOPNOTSUPP;
> > > > +
> > > > +	return tcm_vhost_send_evt(vs, tpg, lun,
> > > > +			VIRTIO_SCSI_T_TRANSPORT_RESET,
> > > > +			VIRTIO_SCSI_EVT_RESET_REMOVED);
> > > > +}
> > > 
> > > tcm_vhost_hotplug() and tcm_vhost_hotunplug() are the same function
> > > except for VIRTIO_SCSI_EVT_RESET_RESCAN vs
> > > VIRTIO_SCSI_EVT_RESET_REMOVED.  That can be passed in as an argument and
> > > the code duplication can be eliminated.
> > 
> > I thought about this also. We can have a tcm_vhost_do_hotplug() helper.
> > 
> >    tcm_vhost_do_hotplug(tpg, lun, plug)
> >    
> >    tcm_vhost_hotplug() {
> >    	tcm_vhost_do_hotplug(tpg, lun, true)
> >    }
> >    
> >    tcm_vhost_hotunplug() {
> >    	tcm_vhost_do_hotplug(tpg, lun, false)
> >    }
> > 
> > The reason I did not do that is I do not like the true/false argument
> > but anyway this could remove duplication. I will do it.
> 
> true/false makes the calling code hard to read, I suggest passing in
> VIRTIO_SCSI_EVT_RESET_RESCAN or VIRTIO_SCSI_EVT_RESET_REMOVED as the
> argument.

Yes. However, I think passing VIRTIO_SCSI_EVT_RESET_* is even worse.

1) Having VIRTIO_SCSI_EVT_RESET_RESCAN or VIRTIO_SCSI_EVT_RESET_REMOVED
around VIRTIO_SCSI_T_TRANSPORT_RESET would be nicer.

2) tcm_vhost_do_hotplug(tpg, lun, VIRTIO_SCSI_EVT_RESET_*) does not
make much sense. What the hell is VIRTIO_SCSI_EVT_RESET_* supposed to
mean when you do hotplug or hotunplug? In contrast, if we have
tcm_vhost_do_hotplug(tpg, lun, plug), plug means doing hotplug or
hotunplug.

> Stefan

-- 
Asias

* Re: [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support
  2013-03-07  9:47         ` Asias He
@ 2013-03-07 12:53           ` Stefan Hajnoczi
  2013-03-08  2:11             ` Asias He
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-03-07 12:53 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Paolo Bonzini

On Thu, Mar 07, 2013 at 05:47:26PM +0800, Asias He wrote:
> On Thu, Mar 07, 2013 at 09:58:04AM +0100, Stefan Hajnoczi wrote:
> > On Thu, Mar 07, 2013 at 08:26:20AM +0800, Asias He wrote:
> > > On Wed, Mar 06, 2013 at 10:21:09AM +0100, Stefan Hajnoczi wrote:
> > > > On Wed, Mar 06, 2013 at 02:16:30PM +0800, Asias He wrote:
> > > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
> > > > > +	u32 event, u32 reason)
> > > > > +{
> > > > > +	struct tcm_vhost_evt *evt;
> > > > > +
> > > > > +	if (atomic_read(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT)
> > > > > +		return NULL;
> > > > > +
> > > > > +	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
> > > > > +
> > > > > +	if (evt) {
> > > > > +		atomic_inc(&vs->vs_events_nr);
> > > > 
> > > > This looks suspicious: checking vs_events_nr > VHOST_SCSI_MAX_EVENT
> > > > first and then incrementing later isn't atomic!
> > > 
> > > This does not matter. (1) and (2) are okay. In case (3), the other side
> > > can only decrease the number of event, the limit will not be exceeded.
> > > 
> > > (1) 
> > >    		   atomic_dec()
> > >    atomic_read() 
> > >    atomic_inc()
> > > (2)
> > >    atomic_read() 
> > >    atomic_inc()
> > >    		   atomic_dec()
> > > 
> > > (3)
> > >    atomic_read() 
> > >    		   atomic_dec()
> > >    atomic_inc()
> > 
> > The cases you listed are fine but I'm actually concerned about
> > tcm_vhost_allocate_evt() racing with itself.  There are 3 callers and
> > I'm not sure which lock prevents them from executing at the same time.
> 
> No lock to prevent it. But what is the racing of executing
> tcm_vhost_allocate_evt() at the same time?

atomic_read() <= VHOST_SCSI_MAX_EVENT
                                       atomic_read() <= VHOST_SCSI_MAX_EVENT
atomic_inc()
                                       atomic_inc()

Now vs->vs_events_nr == VHOST_SCSI_MAX_EVENT + 1 which the if statement
was supposed to prevent.

> > > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> > > > > +{
> > > > > +	struct vhost_scsi *vs = tpg->vhost_scsi;
> > > > > +
> > > > > +	mutex_lock(&tpg->tv_tpg_mutex);
> > > > > +	vs = tpg->vhost_scsi;
> > > > > +	mutex_unlock(&tpg->tv_tpg_mutex);
> > > > > +	if (!vs)
> > > > > +		return -EOPNOTSUPP;
> > > > > +
> > > > > +	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
> > > > > +		return -EOPNOTSUPP;
> > > > > +
> > > > > +	return tcm_vhost_send_evt(vs, tpg, lun,
> > > > > +			VIRTIO_SCSI_T_TRANSPORT_RESET,
> > > > > +			VIRTIO_SCSI_EVT_RESET_REMOVED);
> > > > > +}
> > > > 
> > > > tcm_vhost_hotplug() and tcm_vhost_hotunplug() are the same function
> > > > except for VIRTIO_SCSI_EVT_RESET_RESCAN vs
> > > > VIRTIO_SCSI_EVT_RESET_REMOVED.  That can be passed in as an argument and
> > > > the code duplication can be eliminated.
> > > 
> > > I thought about this also. We can have a tcm_vhost_do_hotplug() helper.
> > > 
> > >    tcm_vhost_do_hotplug(tpg, lun, plug)
> > >    
> > >    tcm_vhost_hotplug() {
> > >    	tcm_vhost_do_hotplug(tpg, lun, true)
> > >    }
> > >    
> > >    tcm_vhost_hotunplug() {
> > >    	tcm_vhost_do_hotplug(tpg, lun, false)
> > >    }
> > > 
> > > The reason I did not do that is I do not like the true/false argument
> > > but anyway this could remove duplication. I will do it.
> > 
> > true/false makes the calling code hard to read, I suggest passing in
> > VIRTIO_SCSI_EVT_RESET_RESCAN or VIRTIO_SCSI_EVT_RESET_REMOVED as the
> > argument.
> 
> Yes. However, I think passing VIRTIO_SCSI_EVT_RESET_* is even worse.
> 
> 1) Having VIRTIO_SCSI_EVT_RESET_RESCAN or VIRTIO_SCSI_EVT_RESET_REMOVED
> around VIRTIO_SCSI_T_TRANSPORT_RESET would be nicer.
> 
> 2) tcm_vhost_do_hotplug(tpg, lun, VIRTIO_SCSI_EVT_RESET_*)
> doest not make much sense. What the hell is VIRTIO_SCSI_EVT_RESET_* when
> you do hotplug or hotunplug. In contrast, if we have
> tcm_vhost_do_hotplug(tpg, lun, plug), plug means doing hotplug or
> hotunplug.

The VIRTIO_SCSI_EVT_RESET_REMOVED constant is pretty clear ("removed"
means unplug).  The VIRTIO_SCSI_EVT_RESET_RESCAN is less clear, but this
code is in drivers/vhost/tcm_vhost.c so you can expect the reader to
know the device specification :).

Anyway, it's not the end of the world if you leave the duplicated code
in, use a boolean parameter, or use the virtio event constant.

Stefan

* Re: [PATCH 5/5] tcm_vhost: Add hotplug/hotunplug support
  2013-03-07 12:53           ` Stefan Hajnoczi
@ 2013-03-08  2:11             ` Asias He
  0 siblings, 0 replies; 14+ messages in thread
From: Asias He @ 2013-03-08  2:11 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Paolo Bonzini

On Thu, Mar 07, 2013 at 01:53:05PM +0100, Stefan Hajnoczi wrote:
> On Thu, Mar 07, 2013 at 05:47:26PM +0800, Asias He wrote:
> > On Thu, Mar 07, 2013 at 09:58:04AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Mar 07, 2013 at 08:26:20AM +0800, Asias He wrote:
> > > > On Wed, Mar 06, 2013 at 10:21:09AM +0100, Stefan Hajnoczi wrote:
> > > > > On Wed, Mar 06, 2013 at 02:16:30PM +0800, Asias He wrote:
> > > > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
> > > > > > +	u32 event, u32 reason)
> > > > > > +{
> > > > > > +	struct tcm_vhost_evt *evt;
> > > > > > +
> > > > > > +	if (atomic_read(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT)
> > > > > > +		return NULL;
> > > > > > +
> > > > > > +	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
> > > > > > +
> > > > > > +	if (evt) {
> > > > > > +		atomic_inc(&vs->vs_events_nr);
> > > > > 
> > > > > This looks suspicious: checking vs_events_nr > VHOST_SCSI_MAX_EVENT
> > > > > first and then incrementing later isn't atomic!
> > > > 
> > > > This does not matter. (1) and (2) are okay. In case (3), the other side
> > > > can only decrease the number of event, the limit will not be exceeded.
> > > > 
> > > > (1) 
> > > >    		   atomic_dec()
> > > >    atomic_read() 
> > > >    atomic_inc()
> > > > (2)
> > > >    atomic_read() 
> > > >    atomic_inc()
> > > >    		   atomic_dec()
> > > > 
> > > > (3)
> > > >    atomic_read() 
> > > >    		   atomic_dec()
> > > >    atomic_inc()
> > > 
> > > The cases you listed are fine but I'm actually concerned about
> > > tcm_vhost_allocate_evt() racing with itself.  There are 3 callers and
> > > I'm not sure which lock prevents them from executing at the same time.
> > 
> > No lock to prevent it. But what is the racing of executing
> > tcm_vhost_allocate_evt() at the same time?
> 
> atomic_read() <= VHOST_SCSI_MAX_EVENT
>                                        atomic_read() <= VHOST_SCSI_MAX_EVENT
> atomic_inc()
>                                        atomic_inc()
> 
> Now vs->vs_events_nr == VHOST_SCSI_MAX_EVENT + 1 which the if statement
> was supposed to prevent.

Right. Okay, I will switch to a lock.
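
For reference, the check-then-increment window could also be closed
without a new lock by reserving the slot first; just a sketch of the
idea, not necessarily what the reworked patch will do:

	/* reserve a slot before allocating, back off if over the limit */
	if (atomic_inc_return(&vs->vs_events_nr) > VHOST_SCSI_MAX_EVENT) {
		atomic_dec(&vs->vs_events_nr);
		return NULL;
	}

	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
	if (!evt) {
		/* give the reserved slot back on allocation failure */
		atomic_dec(&vs->vs_events_nr);
		return NULL;
	}
	evt->event.event = event;
	evt->event.reason = reason;

	return evt;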

> > > > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> > > > > > +{
> > > > > > +	struct vhost_scsi *vs = tpg->vhost_scsi;
> > > > > > +
> > > > > > +	mutex_lock(&tpg->tv_tpg_mutex);
> > > > > > +	vs = tpg->vhost_scsi;
> > > > > > +	mutex_unlock(&tpg->tv_tpg_mutex);
> > > > > > +	if (!vs)
> > > > > > +		return -EOPNOTSUPP;
> > > > > > +
> > > > > > +	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
> > > > > > +		return -EOPNOTSUPP;
> > > > > > +
> > > > > > +	return tcm_vhost_send_evt(vs, tpg, lun,
> > > > > > +			VIRTIO_SCSI_T_TRANSPORT_RESET,
> > > > > > +			VIRTIO_SCSI_EVT_RESET_REMOVED);
> > > > > > +}
> > > > > 
> > > > > tcm_vhost_hotplug() and tcm_vhost_hotunplug() are the same function
> > > > > except for VIRTIO_SCSI_EVT_RESET_RESCAN vs
> > > > > VIRTIO_SCSI_EVT_RESET_REMOVED.  That can be passed in as an argument and
> > > > > the code duplication can be eliminated.
> > > > 
> > > > I thought about this also. We can have a tcm_vhost_do_hotplug() helper.
> > > > 
> > > >    tcm_vhost_do_hotplug(tpg, lun, plug)
> > > >    
> > > >    tcm_vhost_hotplug() {
> > > >    	tcm_vhost_do_hotplug(tpg, lun, true)
> > > >    }
> > > >    
> > > >    tcm_vhost_hotunplug() {
> > > >    	tcm_vhost_do_hotplug(tpg, lun, false)
> > > >    }
> > > > 
> > > > The reason I did not do that is I do not like the true/false argument
> > > > but anyway this could remove duplication. I will do it.
> > > 
> > > true/false makes the calling code hard to read, I suggest passing in
> > > VIRTIO_SCSI_EVT_RESET_RESCAN or VIRTIO_SCSI_EVT_RESET_REMOVED as the
> > > argument.
> > 
> > Yes. However, I think passing VIRTIO_SCSI_EVT_RESET_* is even worse.
> > 
> > 1) Having VIRTIO_SCSI_EVT_RESET_RESCAN or VIRTIO_SCSI_EVT_RESET_REMOVED
> > around VIRTIO_SCSI_T_TRANSPORT_RESET would be nicer.
> > 
> > 2) tcm_vhost_do_hotplug(tpg, lun, VIRTIO_SCSI_EVT_RESET_*)
> > doest not make much sense. What the hell is VIRTIO_SCSI_EVT_RESET_* when
> > you do hotplug or hotunplug. In contrast, if we have
> > tcm_vhost_do_hotplug(tpg, lun, plug), plug means doing hotplug or
> > hotunplug.
> 
> The VIRTIO_SCSI_EVT_RESET_REMOVED constant is pretty clear ("removed"
> means unplug).  The VIRTIO_SCSI_EVT_RESET_RESCAN is less clear, but this
> code is in drivers/vhost/tcm_vhost.c so you can expect the reader to
> know the device specification :).
> 
> Anyway, it's not the end of the world if you leave the duplicated code
> in, use a boolean parameter, or use the virtio event constant.

Sure. This is not very important ;-)

> Stefan

-- 
Asias
