virtualization.lists.linux-foundation.org archive mirror
* [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix
@ 2013-03-08  2:21 Asias He
  2013-03-08  2:21 ` [PATCH V2 1/6] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Asias He @ 2013-03-08  2:21 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Changes in v2:
- Remove code duplication in tcm_vhost_{hotplug,hotunplug}
- Fix racing of vs_events_nr
- Add flush fix patch to this series

Asias He (6):
  tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  tcm_vhost: Introduce tcm_vhost_check_feature()
  tcm_vhost: Introduce tcm_vhost_check_endpoint()
  tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  tcm_vhost: Add hotplug/hotunplug support
  tcm_vhost: Flush vhost_work in vhost_scsi_flush()

 drivers/vhost/tcm_vhost.c | 243 ++++++++++++++++++++++++++++++++++++++++++++--
 drivers/vhost/tcm_vhost.h |  10 ++
 2 files changed, 247 insertions(+), 6 deletions(-)

-- 
1.8.1.4

* [PATCH V2 1/6] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
@ 2013-03-08  2:21 ` Asias He
  2013-03-08  2:21 ` [PATCH V2 2/6] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Asias He @ 2013-03-08  2:21 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

tv_tpg->tv_tpg_vhost_count should be protected by tv_tpg->tv_tpg_mutex;
vhost_scsi_clear_endpoint() was reading and writing it without taking
the lock.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 9951297..b3e50d7 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -860,9 +860,11 @@ static int vhost_scsi_clear_endpoint(
 		if (!tv_tpg)
 			continue;
 
+		mutex_lock(&tv_tpg->tv_tpg_mutex);
 		tv_tport = tv_tpg->tport;
 		if (!tv_tport) {
 			ret = -ENODEV;
+			mutex_unlock(&tv_tpg->tv_tpg_mutex);
 			goto err;
 		}
 
@@ -872,11 +874,13 @@ static int vhost_scsi_clear_endpoint(
 				tv_tport->tport_name, tv_tpg->tport_tpgt,
 				t->vhost_wwpn, t->vhost_tpgt);
 			ret = -EINVAL;
+			mutex_unlock(&tv_tpg->tv_tpg_mutex);
 			goto err;
 		}
 		tv_tpg->tv_tpg_vhost_count--;
 		vs->vs_tpg[target] = NULL;
 		vs->vs_endpoint = false;
+		mutex_unlock(&tv_tpg->tv_tpg_mutex);
 	}
 	mutex_unlock(&vs->dev.mutex);
 	return 0;
-- 
1.8.1.4

* [PATCH V2 2/6] tcm_vhost: Introduce tcm_vhost_check_feature()
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
  2013-03-08  2:21 ` [PATCH V2 1/6] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
@ 2013-03-08  2:21 ` Asias He
  2013-03-08  2:21 ` [PATCH V2 3/6] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Asias He @ 2013-03-08  2:21 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

This helper is useful for checking whether a feature bit is supported;
it takes vs->dev.mutex around vhost_has_feature() so callers do not
have to.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index b3e50d7..39654f4 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
 	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
 }
 
+static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
+{
+	bool ret = false;
+
+	mutex_lock(&vs->dev.mutex);
+	if (vhost_has_feature(&vs->dev, feature))
+		ret = true;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
-- 
1.8.1.4

* [PATCH V2 3/6] tcm_vhost: Introduce tcm_vhost_check_endpoint()
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
  2013-03-08  2:21 ` [PATCH V2 1/6] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
  2013-03-08  2:21 ` [PATCH V2 2/6] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
@ 2013-03-08  2:21 ` Asias He
  2013-03-08  2:21 ` [PATCH V2 4/6] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Asias He @ 2013-03-08  2:21 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

This helper is useful for checking whether vs->vs_endpoint has been set
up by vhost_scsi_set_endpoint(); it takes vs->dev.mutex around the check.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 39654f4..b71e946 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -103,6 +103,18 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
 	return ret;
 }
 
+static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
+{
+	bool ret = false;
+
+	mutex_lock(&vs->dev.mutex);
+	if (vs->vs_endpoint)
+		ret = true;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
-- 
1.8.1.4

* [PATCH V2 4/6] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
                   ` (2 preceding siblings ...)
  2013-03-08  2:21 ` [PATCH V2 3/6] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
@ 2013-03-08  2:21 ` Asias He
  2013-03-08  2:21 ` [PATCH V2 5/6] tcm_vhost: Add hotplug/hotunplug support Asias He
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Asias He @ 2013-03-08  2:21 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

vs->vs_endpoint is protected by vs->dev.mutex. Use
tcm_vhost_check_endpoint() to do the check; the helper does the needed
locking for us.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index b71e946..6739d67 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -606,7 +606,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
 	u8 target;
 
 	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
-	if (unlikely(!vs->vs_endpoint))
+	if (!tcm_vhost_check_endpoint(vs))
 		return;
 
 	mutex_lock(&vq->mutex);
-- 
1.8.1.4

* [PATCH V2 5/6] tcm_vhost: Add hotplug/hotunplug support
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
                   ` (3 preceding siblings ...)
  2013-03-08  2:21 ` [PATCH V2 4/6] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
@ 2013-03-08  2:21 ` Asias He
  2013-03-08  2:21 ` [PATCH V2 6/6] tcm_vhost: Flush vhost_work in vhost_scsi_flush() Asias He
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Asias He @ 2013-03-08  2:21 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Commit 365a7150094 ("[SCSI] virtio-scsi: hotplug support for
virtio-scsi") added hotplug support to virtio-scsi.

This patch adds hotplug and hotunplug support to tcm_vhost.

You can create or delete a LUN in targetcli to hotplug or hotunplug a
LUN in the guest.
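
The event carries the affected LUN in the 8-byte lun[] field of struct
virtio_scsi_event. As a standalone sketch of the encoding this patch
uses (illustrative only: the helper below is not part of the patch, its
name is made up, and the caller is assumed to have zeroed lun[] just as
the kzalloc() in tcm_vhost_allocate_evt() does):

	/*
	 * Byte 0 is a fixed 0x01 and byte 1 is the target id (the
	 * tpgt here); bytes 2-3 hold the LUN, with the high byte also
	 * carrying the 0x40 flat-addressing flag once the LUN no
	 * longer fits in eight bits.
	 */
	static void sketch_encode_event_lun(u8 lun[8], u16 target,
			u32 unpacked_lun)
	{
		lun[0] = 0x01;
		lun[1] = target & 0xFF;
		if (unpacked_lun >= 256)
			lun[2] = (unpacked_lun >> 8) | 0x40;
		lun[3] = unpacked_lun & 0xFF;
	}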

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 211 ++++++++++++++++++++++++++++++++++++++++++++--
 drivers/vhost/tcm_vhost.h |  10 +++
 2 files changed, 216 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 6739d67..af80c7f 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -62,6 +62,9 @@ enum {
 
 #define VHOST_SCSI_MAX_TARGET	256
 #define VHOST_SCSI_MAX_VQ	128
+#define VHOST_SCSI_MAX_EVENT	128
+
+#define VHOST_SCSI_FEATURES (VHOST_FEATURES | (1ULL << VIRTIO_SCSI_F_HOTPLUG))
 
 struct vhost_scsi {
 	/* Protected by vhost_scsi->dev.mutex */
@@ -74,6 +77,13 @@ struct vhost_scsi {
 
 	struct vhost_work vs_completion_work; /* cmd completion work item */
 	struct llist_head vs_completion_list; /* cmd completion queue */
+
+	struct vhost_work vs_event_work; /* evt injection work item */
+	struct llist_head vs_event_list; /* evt injection queue */
+
+	bool vs_events_dropped; /* any missed events, protected by dev.mutex */
+	u64 vs_events_nr; /* # of pending events, protected by dev.mutex */
+
 };
 
 /* Local pointer to allocated TCM configfs fabric module */
@@ -115,6 +125,16 @@ static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
 	return ret;
 }
 
+static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs)
+{
+	bool ret;
+
+	mutex_lock(&vs->dev.mutex);
+	ret = vs->vs_events_dropped;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
@@ -365,6 +385,35 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd)
 	return 0;
 }
 
+static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt)
+{
+	mutex_lock(&vs->dev.mutex);
+	vs->vs_events_nr--;
+	kfree(evt);
+	mutex_unlock(&vs->dev.mutex);
+}
+
+static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
+	u32 event, u32 reason)
+{
+	struct tcm_vhost_evt *evt;
+
+	mutex_lock(&vs->dev.mutex);
+	if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) {
+		mutex_unlock(&vs->dev.mutex);
+		return NULL;
+	}
+
+	evt = kzalloc(sizeof(*evt), GFP_KERNEL);
+	if (evt) {
+		evt->event.event = event;
+		evt->event.reason = reason;
+		vs->vs_events_nr++;
+	}
+	mutex_unlock(&vs->dev.mutex);
+	return evt;
+}
+
 static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
 {
 	struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd;
@@ -383,6 +432,77 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd)
 	kfree(tv_cmd);
 }
 
+static void tcm_vhost_do_evt_work(struct vhost_scsi *vs,
+	struct virtio_scsi_event *event)
+{
+	struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT];
+	struct virtio_scsi_event __user *eventp;
+	unsigned out, in;
+	int head, ret;
+
+	if (!tcm_vhost_check_endpoint(vs))
+		return;
+
+	mutex_lock(&vq->mutex);
+again:
+	vhost_disable_notify(&vs->dev, vq);
+	head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
+			ARRAY_SIZE(vq->iov), &out, &in,
+			NULL, NULL);
+	if (head < 0) {
+		mutex_lock(&vs->dev.mutex);
+		vs->vs_events_dropped = true;
+		mutex_unlock(&vs->dev.mutex);
+		goto out;
+	}
+	if (head == vq->num) {
+		if (vhost_enable_notify(&vs->dev, vq))
+			goto again;
+		mutex_lock(&vs->dev.mutex);
+		vs->vs_events_dropped = true;
+		mutex_unlock(&vs->dev.mutex);
+		goto out;
+	}
+
+	if (vq->iov[out].iov_len != sizeof(struct virtio_scsi_event)) {
+		vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n",
+				vq->iov[out].iov_len);
+		goto out;
+	}
+
+	mutex_lock(&vs->dev.mutex);
+	if (vs->vs_events_dropped) {
+		event->event |= VIRTIO_SCSI_T_EVENTS_MISSED;
+		vs->vs_events_dropped = false;
+	}
+	mutex_unlock(&vs->dev.mutex);
+
+	eventp = vq->iov[out].iov_base;
+	ret = __copy_to_user(eventp, event, sizeof(*event));
+	if (!ret)
+		vhost_add_used_and_signal(&vs->dev, vq, head, 0);
+	else
+		vq_err(vq, "Faulted on tcm_vhost_send_event\n");
+out:
+	mutex_unlock(&vq->mutex);
+}
+
+static void tcm_vhost_evt_work(struct vhost_work *work)
+{
+	struct vhost_scsi *vs = container_of(work, struct vhost_scsi,
+					vs_event_work);
+	struct tcm_vhost_evt *evt;
+	struct llist_node *llnode;
+
+	llnode = llist_del_all(&vs->vs_event_list);
+	while (llnode) {
+		evt = llist_entry(llnode, struct tcm_vhost_evt, list);
+		llnode = llist_next(llnode);
+		tcm_vhost_do_evt_work(vs, &evt->event);
+		tcm_vhost_free_evt(vs, evt);
+	}
+}
+
 /* Fill in status and signal that we are done processing this command
  *
  * This is scheduled in the vhost work queue so we are called with the owner
@@ -781,9 +901,46 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work)
 	pr_debug("%s: The handling func for control queue.\n", __func__);
 }
 
+static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg,
+	struct se_lun *lun, u32 event, u32 reason)
+{
+	struct tcm_vhost_evt *evt;
+
+	if (!tcm_vhost_check_endpoint(vs))
+		return -EOPNOTSUPP;
+
+	evt = tcm_vhost_allocate_evt(vs, event, reason);
+	if (!evt) {
+		mutex_lock(&vs->dev.mutex);
+		vs->vs_events_dropped = true;
+		mutex_unlock(&vs->dev.mutex);
+		return -ENOMEM;
+	}
+
+	if (tpg && lun) {
+		/* TODO: share lun setup code with virtio-scsi.ko */
+		evt->event.lun[0] = 0x01;
+		evt->event.lun[1] = tpg->tport_tpgt & 0xFF;
+		if (lun->unpacked_lun >= 256)
+			evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40;
+		evt->event.lun[3] = lun->unpacked_lun & 0xFF;
+	}
+
+	llist_add(&evt->list, &vs->vs_event_list);
+	vhost_work_queue(&vs->dev, &vs->vs_event_work);
+
+	return 0;
+}
+
 static void vhost_scsi_evt_handle_kick(struct vhost_work *work)
 {
-	pr_debug("%s: The handling func for event queue.\n", __func__);
+	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
+						poll.work);
+	struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev);
+
+	if (tcm_vhost_check_events_dropped(vs))
+		tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
 }
 
 static void vhost_scsi_handle_kick(struct vhost_work *work)
@@ -839,6 +996,7 @@ static int vhost_scsi_set_endpoint(
 				return -EEXIST;
 			}
 			tv_tpg->tv_tpg_vhost_count++;
+			tv_tpg->vhost_scsi = vs;
 			vs->vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
 			smp_mb__after_atomic_inc();
 			match = true;
@@ -902,6 +1060,7 @@ static int vhost_scsi_clear_endpoint(
 			goto err;
 		}
 		tv_tpg->tv_tpg_vhost_count--;
+		tv_tpg->vhost_scsi = NULL;
 		vs->vs_tpg[target] = NULL;
 		vs->vs_endpoint = false;
 		mutex_unlock(&tv_tpg->tv_tpg_mutex);
@@ -924,6 +1083,9 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
 		return -ENOMEM;
 
 	vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work);
+	vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work);
+
+	s->vs_events_nr = 0;
 
 	s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick;
 	s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick;
@@ -969,7 +1131,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 
 static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
 {
-	if (features & ~VHOST_FEATURES)
+	if (features & ~VHOST_SCSI_FEATURES)
 		return -EOPNOTSUPP;
 
 	mutex_lock(&vs->dev.mutex);
@@ -1015,7 +1177,7 @@ static long vhost_scsi_ioctl(struct file *f, unsigned int ioctl,
 			return -EFAULT;
 		return 0;
 	case VHOST_GET_FEATURES:
-		features = VHOST_FEATURES;
+		features = VHOST_SCSI_FEATURES;
 		if (copy_to_user(featurep, &features, sizeof features))
 			return -EFAULT;
 		return 0;
@@ -1085,6 +1247,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport)
 	return "Unknown";
 }
 
+static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg,
+	struct se_lun *lun, bool plug)
+{
+	struct vhost_scsi *vs;
+	u32 reason;
+
+	mutex_lock(&tpg->tv_tpg_mutex);
+	vs = tpg->vhost_scsi;
+	mutex_unlock(&tpg->tv_tpg_mutex);
+	if (!vs)
+		return -EOPNOTSUPP;
+
+	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
+		return -EOPNOTSUPP;
+
+	if (plug)
+		reason = VIRTIO_SCSI_EVT_RESET_RESCAN;
+	else
+		reason = VIRTIO_SCSI_EVT_RESET_REMOVED;
+
+	return tcm_vhost_send_evt(vs, tpg, lun,
+			VIRTIO_SCSI_T_TRANSPORT_RESET,
+			reason);
+}
+
+static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
+{
+	return tcm_vhost_do_plug(tpg, lun, true);
+}
+
+static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
+{
+	return tcm_vhost_do_plug(tpg, lun, false);
+}
+
 static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
 	struct se_lun *lun)
 {
@@ -1095,18 +1293,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
 	tv_tpg->tv_tpg_port_count++;
 	mutex_unlock(&tv_tpg->tv_tpg_mutex);
 
+	tcm_vhost_hotplug(tv_tpg, lun);
+
 	return 0;
 }
 
 static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg,
-	struct se_lun *se_lun)
+	struct se_lun *lun)
 {
 	struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
 				struct tcm_vhost_tpg, se_tpg);
-
 	mutex_lock(&tv_tpg->tv_tpg_mutex);
 	tv_tpg->tv_tpg_port_count--;
 	mutex_unlock(&tv_tpg->tv_tpg_mutex);
+
+	tcm_vhost_hotunplug(tv_tpg, lun);
 }
 
 static struct se_node_acl *tcm_vhost_make_nodeacl(
diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h
index 1d2ae7a..94e9ee53 100644
--- a/drivers/vhost/tcm_vhost.h
+++ b/drivers/vhost/tcm_vhost.h
@@ -53,6 +53,7 @@ struct tcm_vhost_nacl {
 	struct se_node_acl se_node_acl;
 };
 
+struct vhost_scsi;
 struct tcm_vhost_tpg {
 	/* Vhost port target portal group tag for TCM */
 	u16 tport_tpgt;
@@ -70,6 +71,8 @@ struct tcm_vhost_tpg {
 	struct tcm_vhost_tport *tport;
 	/* Returned by tcm_vhost_make_tpg() */
 	struct se_portal_group se_tpg;
+	/* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */
+	struct vhost_scsi *vhost_scsi;
 };
 
 struct tcm_vhost_tport {
@@ -83,6 +86,13 @@ struct tcm_vhost_tport {
 	struct se_wwn tport_wwn;
 };
 
+struct tcm_vhost_evt {
+	/* virtio_scsi event */
+	struct virtio_scsi_event event;
+	/* virtio_scsi event list, serviced from vhost worker thread */
+	struct llist_node list;
+};
+
 /*
  * As per request from MST, keep TCM_VHOST related ioctl defines out of
  * linux/vhost.h (user-space) for now..
-- 
1.8.1.4

* [PATCH V2 6/6] tcm_vhost: Flush vhost_work in vhost_scsi_flush()
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
                   ` (4 preceding siblings ...)
  2013-03-08  2:21 ` [PATCH V2 5/6] tcm_vhost: Add hotplug/hotunplug support Asias He
@ 2013-03-08  2:21 ` Asias He
  2013-03-08  8:24 ` [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Stefan Hajnoczi
  2013-03-11 12:03 ` Michael S. Tsirkin
  7 siblings, 0 replies; 13+ messages in thread
From: Asias He @ 2013-03-08  2:21 UTC (permalink / raw)
  To: Nicholas Bellinger
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

We also need to flush the vhost works: one is the completion vhost_work,
the other is the event vhost_work. Flushing the virtqueues alone does
not wait for these, since they are queued on the vhost device directly.

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index af80c7f..f80a545 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -1127,6 +1127,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
 
 	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
 		vhost_scsi_flush_vq(vs, i);
+	vhost_work_flush(&vs->dev, &vs->vs_completion_work);
+	vhost_work_flush(&vs->dev, &vs->vs_event_work);
 }
 
 static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
-- 
1.8.1.4

* Re: [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
                   ` (5 preceding siblings ...)
  2013-03-08  2:21 ` [PATCH V2 6/6] tcm_vhost: Flush vhost_work in vhost_scsi_flush() Asias He
@ 2013-03-08  8:24 ` Stefan Hajnoczi
  2013-03-11 12:03 ` Michael S. Tsirkin
  7 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2013-03-08  8:24 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Paolo Bonzini

On Fri, Mar 08, 2013 at 10:21:41AM +0800, Asias He wrote:
> Changes in v2:
> - Remove code duplication in tcm_vhost_{hotplug,hotunplug}
> - Fix racing of vs_events_nr
> - Add flush fix patch to this series
> 
> Asias He (6):
>   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
>   tcm_vhost: Introduce tcm_vhost_check_feature()
>   tcm_vhost: Introduce tcm_vhost_check_endpoint()
>   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
>   tcm_vhost: Add hotplug/hotunplug support
>   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> 
>  drivers/vhost/tcm_vhost.c | 243 ++++++++++++++++++++++++++++++++++++++++++++--
>  drivers/vhost/tcm_vhost.h |  10 ++
>  2 files changed, 247 insertions(+), 6 deletions(-)
> 
> -- 
> 1.8.1.4
> 

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

* Re: [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix
  2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
                   ` (6 preceding siblings ...)
  2013-03-08  8:24 ` [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Stefan Hajnoczi
@ 2013-03-11 12:03 ` Michael S. Tsirkin
  2013-03-12  2:31   ` Asias He
  7 siblings, 1 reply; 13+ messages in thread
From: Michael S. Tsirkin @ 2013-03-11 12:03 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Fri, Mar 08, 2013 at 10:21:41AM +0800, Asias He wrote:
> Changes in v2:
> - Remove code duplication in tcm_vhost_{hotplug,hotunplug}
> - Fix racing of vs_events_nr
> - Add flush fix patch to this series
> Asias He (6):
>   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
>   tcm_vhost: Introduce tcm_vhost_check_feature()
>   tcm_vhost: Introduce tcm_vhost_check_endpoint()
>   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
>   tcm_vhost: Add hotplug/hotunplug support
>   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> 

Are all these patches bugfixes?

Please don't add any more features in the kernel until qemu starts using
this driver.  We put it in the kernel after the KVM forum, where it looked
like everyone agreed it was useful and qemu and kvmtool would use it, but
it has been in the kernel since July and the qemu patches are still
outstanding.

If only some of the patches are bugfixes, could you please separate them
out and submit them for 3.9?

Thanks,

>  drivers/vhost/tcm_vhost.c | 243 ++++++++++++++++++++++++++++++++++++++++++++--
>  drivers/vhost/tcm_vhost.h |  10 ++
>  2 files changed, 247 insertions(+), 6 deletions(-)
> 
> -- 
> 1.8.1.4

* Re: [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix
  2013-03-11 12:03 ` Michael S. Tsirkin
@ 2013-03-12  2:31   ` Asias He
  2013-03-12  9:55     ` Michael S. Tsirkin
  0 siblings, 1 reply; 13+ messages in thread
From: Asias He @ 2013-03-12  2:31 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Mon, Mar 11, 2013 at 02:03:27PM +0200, Michael S. Tsirkin wrote:
> On Fri, Mar 08, 2013 at 10:21:41AM +0800, Asias He wrote:
> > Changes in v2:
> > - Remove code duplication in tcm_vhost_{hotplug,hotunplug}
> > - Fix racing of vs_events_nr
> > - Add flush fix patch to this series
> > Asias He (6):
> >   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
> >   tcm_vhost: Introduce tcm_vhost_check_feature()
> >   tcm_vhost: Introduce tcm_vhost_check_endpoint()
> >   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
> >   tcm_vhost: Add hotplug/hotunplug support
> >   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> > 
> 
> Are all these patches bugfixes?

Nope.

> Please don't add any more features in the kernel until qemu starts using
> this driver.

Why? Why should qemu hold back improvements to the driver, and the benefit
to the driver's other users? Kvm tool has been using it since last Aug.
This adds the missing disk hotplug support to kvm tool. Plus, this is not
a pure "feature"; it is specified in the virtio-scsi spec.

> We put it in the kernel after the KVM forum, where it looked like
> everyone agreed it was useful and qemu and kvmtool would use it, but it
> has been in the kernel since July and the qemu patches are still
> outstanding.

I am working on the qemu bits where Paolo and Nicholas left off.

> If only some of the patches are bugfixes, could you please separate them
> out and submit them for 3.9?

I will separate patches 2 and 5 for hotplug support and the others for
bugfixes.

> Thanks,
> 
> >  drivers/vhost/tcm_vhost.c | 243 ++++++++++++++++++++++++++++++++++++++++++++--
> >  drivers/vhost/tcm_vhost.h |  10 ++
> >  2 files changed, 247 insertions(+), 6 deletions(-)
> > 
> > -- 
> > 1.8.1.4

-- 
Asias

* Re: [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix
  2013-03-12  2:31   ` Asias He
@ 2013-03-12  9:55     ` Michael S. Tsirkin
  2013-03-13  3:29       ` Asias He
  2013-03-13  8:11       ` Paolo Bonzini
  0 siblings, 2 replies; 13+ messages in thread
From: Michael S. Tsirkin @ 2013-03-12  9:55 UTC (permalink / raw)
  To: Asias He
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Tue, Mar 12, 2013 at 10:31:09AM +0800, Asias He wrote:
> On Mon, Mar 11, 2013 at 02:03:27PM +0200, Michael S. Tsirkin wrote:
> > On Fri, Mar 08, 2013 at 10:21:41AM +0800, Asias He wrote:
> > > Changes in v2:
> > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug}
> > > - Fix racing of vs_events_nr
> > > - Add flush fix patch to this series
> > > Asias He (6):
> > >   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
> > >   tcm_vhost: Introduce tcm_vhost_check_feature()
> > >   tcm_vhost: Introduce tcm_vhost_check_endpoint()
> > >   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
> > >   tcm_vhost: Add hotplug/hotunplug support
> > >   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> > > 
> > 
> > Are all these patches bugfixes?
> 
> Nope.
> 
> > Please don't add any more features in the kernel until qemu starts using
> > this driver.
> 
> Why? Why should qemu hold back improvements to the driver, and the benefit
> to the driver's other users? Kvm tool has been using it since last Aug.
> This adds the missing disk hotplug support to kvm tool.

Basically we shouldn't add stuff to the kernel for a single user.
Before we commit to supporting features forever, let's take the time
to see that the interfaces satisfy the requirements of multiple users.
It doesn't have to be qemu, but we need to see several users.

> Plus, this is not
> a pure "feature"; it is specified in the virtio-scsi spec.

The spec doesn't say it's a mandatory feature, or did I miss something?

> > We put it in the kernel after the KVM forum, where it looked like
> > everyone agreed it was useful and qemu and kvmtool would use it, but it
> > has been in the kernel since July and the qemu patches are still
> > outstanding.
> 
> I am working on the qemu bits where Paolo and Nicholas left off.

Great.

> > If only some of the patches are bugfixes, could you please separate them
> > out and submit them for 3.9?
> 
> I will separate patches 2 and 5 for hotplug support and the others for
> bugfixes.

Thanks.

> > Thanks,
> > 
> > >  drivers/vhost/tcm_vhost.c | 243 ++++++++++++++++++++++++++++++++++++++++++++--
> > >  drivers/vhost/tcm_vhost.h |  10 ++
> > >  2 files changed, 247 insertions(+), 6 deletions(-)
> > > 
> > > -- 
> > > 1.8.1.4
> 
> -- 
> Asias

* Re: [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix
  2013-03-12  9:55     ` Michael S. Tsirkin
@ 2013-03-13  3:29       ` Asias He
  2013-03-13  8:11       ` Paolo Bonzini
  1 sibling, 0 replies; 13+ messages in thread
From: Asias He @ 2013-03-13  3:29 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Tue, Mar 12, 2013 at 11:55:14AM +0200, Michael S. Tsirkin wrote:
> On Tue, Mar 12, 2013 at 10:31:09AM +0800, Asias He wrote:
> > On Mon, Mar 11, 2013 at 02:03:27PM +0200, Michael S. Tsirkin wrote:
> > > On Fri, Mar 08, 2013 at 10:21:41AM +0800, Asias He wrote:
> > > > Changes in v2:
> > > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug}
> > > > - Fix racing of vs_events_nr
> > > > - Add flush fix patch to this series
> > > > Asias He (6):
> > > >   tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint()
> > > >   tcm_vhost: Introduce tcm_vhost_check_feature()
> > > >   tcm_vhost: Introduce tcm_vhost_check_endpoint()
> > > >   tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()
> > > >   tcm_vhost: Add hotplug/hotunplug support
> > > >   tcm_vhost: Flush vhost_work in vhost_scsi_flush()
> > > > 
> > > 
> > > Are all these patches bugfixes?
> > 
> > Nope.
> > 
> > > Please don't add any more features in the kernel until qemu starts using
> > > this driver.
> > 
> > Why? Why should qemu hold back improvements to the driver, and the benefit
> > to the driver's other users? Kvm tool has been using it since last Aug.
> > This adds the missing disk hotplug support to kvm tool.
> 
> Basically we shouldn't add stuff to the kernel for a single user.
> Before we commit to supporting features forever, let's take the time
> to see that the interfaces satisfy the requirements of multiple users.
> It doesn't have to be qemu, but we need to see several users.

Well, this makes sense to me.

> > Plus, this is not
> > a pure "feature"; it is specified in the virtio-scsi spec.
> 
> The spec doesn't say it's a mandatory feature, or did I miss something?

No, it's not mandatory.

> > > We put it in the kernel after the KVM forum, where it looked like
> > > everyone agreed it was useful and qemu and kvmtool would use it, but it
> > > has been in the kernel since July and the qemu patches are still
> > > outstanding.
> > 
> > I am working on the qemu bits where Paolo and Nicholas left off.
> 
> Great.
> 
> > > If only some of the patches are bugfixes, could you please separate them
> > > out and submit them for 3.9?
> > 
> > I will separate patches 2 and 5 for hotplug support and the others for
> > bugfixes.
> 
> Thanks.
> 
> > > Thanks,
> > > 
> > > >  drivers/vhost/tcm_vhost.c | 243 ++++++++++++++++++++++++++++++++++++++++++++--
> > > >  drivers/vhost/tcm_vhost.h |  10 ++
> > > >  2 files changed, 247 insertions(+), 6 deletions(-)
> > > > 
> > > > -- 
> > > > 1.8.1.4
> > 
> > -- 
> > Asias

-- 
Asias

* Re: [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix
  2013-03-12  9:55     ` Michael S. Tsirkin
  2013-03-13  3:29       ` Asias He
@ 2013-03-13  8:11       ` Paolo Bonzini
  1 sibling, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2013-03-13  8:11 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, target-devel, Stefan Hajnoczi

On 12/03/2013 10:55, Michael S. Tsirkin wrote:
> > Why? Why should qemu hold back improvements to the driver, and the benefit
> > to the driver's other users? Kvm tool has been using it since last Aug.
> > This adds the missing disk hotplug support to kvm tool.
> 
> Basically we shouldn't add stuff to the kernel for a single user.
> Before we commit to supporting features forever, let's take the time
> to see that the interfaces satisfy the requirements of multiple users.

The userspace components of vhost-scsi do not need any new kernel API to
get this new feature.  In fact, there is no new kernel API introduced by
the hotplug/hotunplug patches.

> > Plus, this is not a pure "feature"; it is specified in the virtio-scsi spec.
> 
> The spec doesn't say it's a mandatory feature, or did I miss something?

It's not mandatory to support hotplug.  However, it is mandatory for the
device to provide an event virtqueue, even if it will never report an
event and it will do nothing with the virtqueue.  For this reason, a
compliant userspace must already have all the support for this.

Paolo

End of thread; newest message: 2013-03-13  8:11 UTC.

Thread overview (13+ messages):
2013-03-08  2:21 [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Asias He
2013-03-08  2:21 ` [PATCH V2 1/6] tcm_vhost: Add missed lock in vhost_scsi_clear_endpoint() Asias He
2013-03-08  2:21 ` [PATCH V2 2/6] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
2013-03-08  2:21 ` [PATCH V2 3/6] tcm_vhost: Introduce tcm_vhost_check_endpoint() Asias He
2013-03-08  2:21 ` [PATCH V2 4/6] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq() Asias He
2013-03-08  2:21 ` [PATCH V2 5/6] tcm_vhost: Add hotplug/hotunplug support Asias He
2013-03-08  2:21 ` [PATCH V2 6/6] tcm_vhost: Flush vhost_work in vhost_scsi_flush() Asias He
2013-03-08  8:24 ` [PATCH V2 0/6] tcm_vhost hotplug/hotunplug support and locking/flushing fix Stefan Hajnoczi
2013-03-11 12:03 ` Michael S. Tsirkin
2013-03-12  2:31   ` Asias He
2013-03-12  9:55     ` Michael S. Tsirkin
2013-03-13  3:29       ` Asias He
2013-03-13  8:11       ` Paolo Bonzini
