* [PATCH v5 0/3] tcm_vhost hotplug
@ 2013-04-09 8:51 Asias He
2013-04-09 8:51 ` [PATCH v5 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
` (4 more replies)
0 siblings, 5 replies; 37+ messages in thread
From: Asias He @ 2013-04-09 8:51 UTC (permalink / raw)
To: Nicholas Bellinger
Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
Stefan Hajnoczi, Paolo Bonzini
Asias He (3):
tcm_vhost: Introduce tcm_vhost_check_feature()
tcm_vhost: Add helper to check if endpoint is setup
tcm_vhost: Add hotplug/hotunplug support
drivers/vhost/tcm_vhost.c | 236 +++++++++++++++++++++++++++++++++++++++++++++-
drivers/vhost/tcm_vhost.h | 10 ++
2 files changed, 242 insertions(+), 4 deletions(-)
--
1.8.1.4
^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v5 1/3] tcm_vhost: Introduce tcm_vhost_check_feature()
  2013-04-09 8:51 [PATCH v5 0/3] tcm_vhost hotplug Asias He
@ 2013-04-09 8:51 ` Asias He
  2013-04-09 8:51 ` [PATCH v5 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He
                  ` (3 subsequent siblings)
  4 siblings, 0 replies; 37+ messages in thread
From: Asias He @ 2013-04-09 8:51 UTC (permalink / raw)
To: Nicholas Bellinger
Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

This helper is useful to check if a feature is supported.

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index c127731..f0189bc 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -99,6 +99,18 @@ static int iov_num_pages(struct iovec *iov)
 	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
 }
 
+static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
+{
+	bool ret = false;
+
+	mutex_lock(&vs->dev.mutex);
+	if (vhost_has_feature(&vs->dev, feature))
+		ret = true;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread
* [PATCH v5 2/3] tcm_vhost: Add helper to check if endpoint is setup
  2013-04-09 8:51 [PATCH v5 0/3] tcm_vhost hotplug Asias He
  2013-04-09 8:51 ` [PATCH v5 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
@ 2013-04-09 8:51 ` Asias He
  2013-04-09 8:51 ` [PATCH v5 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He
                  ` (2 subsequent siblings)
  4 siblings, 0 replies; 37+ messages in thread
From: Asias He @ 2013-04-09 8:51 UTC (permalink / raw)
To: Nicholas Bellinger
Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index f0189bc..7069881 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
 	return ret;
 }
 
+static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq)
+{
+	bool ret = false;
+
+	/*
+	 * We can handle the vq only after the endpoint is setup by calling the
+	 * VHOST_SCSI_SET_ENDPOINT ioctl.
+	 *
+	 * TODO: Check that we are running from vhost_worker which acts
+	 * as read-side critical section for vhost kind of RCU.
+	 * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h
+	 */
+	if (rcu_dereference_check(vq->private_data, 1))
+		ret = true;
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 37+ messages in thread
* [PATCH v5 3/3] tcm_vhost: Add hotplug/hotunplug support
  2013-04-09 8:51 [PATCH v5 0/3] tcm_vhost hotplug Asias He
  2013-04-09 8:51 ` [PATCH v5 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
  2013-04-09 8:51 ` [PATCH v5 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He
@ 2013-04-09 8:51 ` Asias He
  2013-04-12 23:10 ` [PATCH v5 0/3] tcm_vhost hotplug Nicholas A. Bellinger
  [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org>
  4 siblings, 0 replies; 37+ messages in thread
From: Asias He @ 2013-04-09 8:51 UTC (permalink / raw)
To: Nicholas Bellinger
Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for
virtio-scsi), hotplug support is added to virtio-scsi. This patch adds
hotplug and hotunplug support to tcm_vhost.

You can create or delete a LUN in targetcli to hotplug or hotunplug a
LUN in the guest.

Changes in v5:
- Switch vs_events_nr to int from u64
- Set s->vs_events_dropped flag in tcm_vhost_allocate_evt
- Do not nest dev mutex within vq mutex
- Use vs_events_lock to protect vs_events_dropped and vs_events_nr
- Rebase to target/master

Changes in v4:
- Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt
- Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick

Changes in v3:
- Separate the bug fix to another thread

Changes in v2:
- Remove code duplication in tcm_vhost_{hotplug,hotunplug}
- Fix racing of vs_events_nr
- Add flush fix patch to this series

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 206 +++++++++++++++++++++++++++++++++++++++++++++-
 drivers/vhost/tcm_vhost.h |  10 +++
 2 files changed, 212 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 7069881..3351ed3 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -66,11 +66,13 @@ enum {
  * TODO: debug and remove the workaround.
*/ enum { - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | + (1ULL << VIRTIO_SCSI_F_HOTPLUG) }; #define VHOST_SCSI_MAX_TARGET 256 #define VHOST_SCSI_MAX_VQ 128 +#define VHOST_SCSI_MAX_EVENT 128 struct vhost_scsi { /* Protected by vhost_scsi->dev.mutex */ @@ -82,6 +84,13 @@ struct vhost_scsi { struct vhost_work vs_completion_work; /* cmd completion work item */ struct llist_head vs_completion_list; /* cmd completion queue */ + + struct vhost_work vs_event_work; /* evt injection work item */ + struct llist_head vs_event_list; /* evt injection queue */ + + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ + bool vs_events_dropped; /* any missed events */ + int vs_events_nr; /* num of pending events */ }; /* Local pointer to allocated TCM configfs fabric module */ @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) return ret; } +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) +{ + bool ret; + + mutex_lock(&vs->vs_events_lock); + ret = vs->vs_events_dropped; + mutex_unlock(&vs->vs_events_lock); + + return ret; +} + static int tcm_vhost_check_true(struct se_portal_group *se_tpg) { return 1; @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) return 0; } +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) +{ + mutex_lock(&vs->vs_events_lock); + vs->vs_events_nr--; + kfree(evt); + mutex_unlock(&vs->vs_events_lock); +} + +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, + u32 event, u32 reason) +{ + struct tcm_vhost_evt *evt; + + mutex_lock(&vs->vs_events_lock); + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { + vs->vs_events_dropped = true; + mutex_unlock(&vs->vs_events_lock); + return NULL; + } + + evt = kzalloc(sizeof(*evt), GFP_KERNEL); + if (evt) { + evt->event.event = event; + evt->event.reason = reason; + vs->vs_events_nr++; + } + mutex_unlock(&vs->vs_events_lock); + + return evt; +} + static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) { struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; @@ -397,6 +448,74 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) kfree(tv_cmd); } +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, + struct virtio_scsi_event *event) +{ + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; + struct virtio_scsi_event __user *eventp; + unsigned out, in; + int head, ret; + + if (!tcm_vhost_check_endpoint(vq)) + return; + + mutex_lock(&vs->vs_events_lock); + mutex_lock(&vq->mutex); +again: + vhost_disable_notify(&vs->dev, vq); + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, + ARRAY_SIZE(vq->iov), &out, &in, + NULL, NULL); + if (head < 0) { + vs->vs_events_dropped = true; + goto out; + } + if (head == vq->num) { + if (vhost_enable_notify(&vs->dev, vq)) + goto again; + vs->vs_events_dropped = true; + goto out; + } + + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", + vq->iov[out].iov_len); + vs->vs_events_dropped = true; + goto out; + } + + if (vs->vs_events_dropped) { + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; + vs->vs_events_dropped = false; + } + + eventp = vq->iov[out].iov_base; + ret = __copy_to_user(eventp, event, sizeof(*event)); + if (!ret) + vhost_add_used_and_signal(&vs->dev, vq, head, 0); + else + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); +out: + mutex_unlock(&vq->mutex); + 
mutex_unlock(&vs->vs_events_lock); +} + +static void tcm_vhost_evt_work(struct vhost_work *work) +{ + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, + vs_event_work); + struct tcm_vhost_evt *evt; + struct llist_node *llnode; + + llnode = llist_del_all(&vs->vs_event_list); + while (llnode) { + evt = llist_entry(llnode, struct tcm_vhost_evt, list); + llnode = llist_next(llnode); + tcm_vhost_do_evt_work(vs, &evt->event); + tcm_vhost_free_evt(vs, evt); + } +} + /* Fill in status and signal that we are done processing this command * * This is scheduled in the vhost work queue so we are called with the owner @@ -804,9 +923,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) pr_debug("%s: The handling func for control queue.\n", __func__); } +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, + struct se_lun *lun, u32 event, u32 reason) +{ + struct tcm_vhost_evt *evt; + + evt = tcm_vhost_allocate_evt(vs, event, reason); + if (!evt) + return -ENOMEM; + + if (tpg && lun) { + /* TODO: share lun setup code with virtio-scsi.ko */ + evt->event.lun[0] = 0x01; + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; + if (lun->unpacked_lun >= 256) + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; + evt->event.lun[3] = lun->unpacked_lun & 0xFF; + } + + llist_add(&evt->list, &vs->vs_event_list); + vhost_work_queue(&vs->dev, &vs->vs_event_work); + + return 0; +} + static void vhost_scsi_evt_handle_kick(struct vhost_work *work) { - pr_debug("%s: The handling func for event queue.\n", __func__); + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, + poll.work); + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); + + if (!tcm_vhost_check_endpoint(vq)) + return; + + if (tcm_vhost_check_events_dropped(vs)) + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); + } static void vhost_scsi_handle_kick(struct vhost_work *work) @@ -888,6 +1040,7 @@ static int vhost_scsi_set_endpoint( return -EEXIST; } tv_tpg->tv_tpg_vhost_count++; + tv_tpg->vhost_scsi = vs; vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; smp_mb__after_atomic_inc(); match = true; @@ -971,6 +1124,7 @@ static int vhost_scsi_clear_endpoint( goto err_tpg; } tv_tpg->tv_tpg_vhost_count--; + tv_tpg->vhost_scsi = NULL; vs->vs_tpg[target] = NULL; match = true; mutex_unlock(&tv_tpg->tv_tpg_mutex); @@ -1030,6 +1184,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) return -ENOMEM; vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); + + s->vs_events_nr = 0; + s->vs_events_dropped = false; + mutex_init(&s->vs_events_lock); s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; @@ -1160,6 +1319,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) return "Unknown"; } +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, + struct se_lun *lun, bool plug) +{ + + struct vhost_scsi *vs = tpg->vhost_scsi; + u32 reason; + + mutex_lock(&tpg->tv_tpg_mutex); + vs = tpg->vhost_scsi; + mutex_unlock(&tpg->tv_tpg_mutex); + if (!vs) + return -EOPNOTSUPP; + + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) + return -EOPNOTSUPP; + + if (plug) + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; + else + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; + + return tcm_vhost_send_evt(vs, tpg, lun, + VIRTIO_SCSI_T_TRANSPORT_RESET, + reason); +} + +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, 
struct se_lun *lun) +{ + return tcm_vhost_do_plug(tpg, lun, true); +} + +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) +{ + return tcm_vhost_do_plug(tpg, lun, false); +} + static int tcm_vhost_port_link(struct se_portal_group *se_tpg, struct se_lun *lun) { @@ -1170,18 +1365,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, tv_tpg->tv_tpg_port_count++; mutex_unlock(&tv_tpg->tv_tpg_mutex); + tcm_vhost_hotplug(tv_tpg, lun); + return 0; } static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, - struct se_lun *se_lun) + struct se_lun *lun) { struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, struct tcm_vhost_tpg, se_tpg); - mutex_lock(&tv_tpg->tv_tpg_mutex); tv_tpg->tv_tpg_port_count--; mutex_unlock(&tv_tpg->tv_tpg_mutex); + + tcm_vhost_hotunplug(tv_tpg, lun); } static struct se_node_acl *tcm_vhost_make_nodeacl( diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h index 1d2ae7a..94e9ee53 100644 --- a/drivers/vhost/tcm_vhost.h +++ b/drivers/vhost/tcm_vhost.h @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { struct se_node_acl se_node_acl; }; +struct vhost_scsi; struct tcm_vhost_tpg { /* Vhost port target portal group tag for TCM */ u16 tport_tpgt; @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { struct tcm_vhost_tport *tport; /* Returned by tcm_vhost_make_tpg() */ struct se_portal_group se_tpg; + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ + struct vhost_scsi *vhost_scsi; }; struct tcm_vhost_tport { @@ -83,6 +86,13 @@ struct tcm_vhost_tport { struct se_wwn tport_wwn; }; +struct tcm_vhost_evt { + /* virtio_scsi event */ + struct virtio_scsi_event event; + /* virtio_scsi event list, serviced from vhost worker thread */ + struct llist_node list; +}; + /* * As per request from MST, keep TCM_VHOST related ioctl defines out of * linux/vhost.h (user-space) for now.. -- 1.8.1.4 ^ permalink raw reply related [flat|nested] 37+ messages in thread
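
(Reader's note on the lun[] bytes set up in tcm_vhost_send_evt() above: the
patch fills only the first four bytes of the event's lun array, the rest stay
zero from kzalloc(). The standalone helper below is purely illustrative and
its name is made up; it just restates those assignments so a concrete value
can be traced through.)

/* Illustrative only: the virtio-scsi single-level LUN encoding used above. */
static void example_encode_event_lun(unsigned char lun[4],
				     unsigned char target,
				     unsigned int unpacked_lun)
{
	lun[0] = 0x01;				/* single-level LUN addressing */
	lun[1] = target;			/* tpg->tport_tpgt & 0xFF      */
	lun[2] = 0;
	if (unpacked_lun >= 256)		/* SAM flat-space form for     */
		lun[2] = (unpacked_lun >> 8) | 0x40;	/* LUNs above 255      */
	lun[3] = unpacked_lun & 0xFF;
}

/* e.g. target 3, LUN 261 (0x105) yields { 0x01, 0x03, 0x41, 0x05 } */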
* Re: [PATCH v5 0/3] tcm_vhost hotplug
  2013-04-09 8:51 [PATCH v5 0/3] tcm_vhost hotplug Asias He
                  ` (2 preceding siblings ...)
  2013-04-09 8:51 ` [PATCH v5 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He
@ 2013-04-12 23:10 ` Nicholas A. Bellinger
  [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org>
  4 siblings, 0 replies; 37+ messages in thread
From: Nicholas A. Bellinger @ 2013-04-12 23:10 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, virtualization, Stefan Hajnoczi, Paolo Bonzini

Hi MST,

Ping on this series for use with vhost-scsi-pci code..?

They will need to be heading into target-pending/for-next soon to make
the v3.10 window opening in t-minus 9 days. ;)

--nab

On Tue, 2013-04-09 at 16:51 +0800, Asias He wrote:
> Asias He (3):
>   tcm_vhost: Introduce tcm_vhost_check_feature()
>   tcm_vhost: Add helper to check if endpoint is setup
>   tcm_vhost: Add hotplug/hotunplug support
>
>  drivers/vhost/tcm_vhost.c | 236 +++++++++++++++++++++++++++++++++++++++++++++-
>  drivers/vhost/tcm_vhost.h | 10 ++
>  2 files changed, 242 insertions(+), 4 deletions(-)
>

^ permalink raw reply	[flat|nested] 37+ messages in thread
[parent not found: <1365808250.26011.68.camel@haakon2.linux-iscsi.org>]
* [PATCH v6 0/3] tcm_vhost hotplug
  [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org>
@ 2013-04-13 3:20 ` Asias He
  2013-04-13 3:20 ` [PATCH v6 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
                 ` (4 subsequent siblings)
  5 siblings, 0 replies; 37+ messages in thread
From: Asias He @ 2013-04-13 3:20 UTC (permalink / raw)
To: Nicholas Bellinger
Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Rebase + a small argument-passing change.

Asias He (3):
  tcm_vhost: Introduce tcm_vhost_check_feature()
  tcm_vhost: Add helper to check if endpoint is setup
  tcm_vhost: Add hotplug/hotunplug support

 drivers/vhost/tcm_vhost.c | 237 +++++++++++++++++++++++++++++++++++++++++++++-
 drivers/vhost/tcm_vhost.h | 10 ++
 2 files changed, 243 insertions(+), 4 deletions(-)

-- 
1.8.1.4

^ permalink raw reply	[flat|nested] 37+ messages in thread
* [PATCH v6 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org> 2013-04-13 3:20 ` [PATCH v6 " Asias He @ 2013-04-13 3:20 ` Asias He 2013-04-13 3:20 ` [PATCH v6 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He ` (3 subsequent siblings) 5 siblings, 0 replies; 37+ messages in thread From: Asias He @ 2013-04-13 3:20 UTC (permalink / raw) To: Nicholas Bellinger Cc: kvm, Michael S. Tsirkin, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini This helper is useful to check if a feature is supported. Signed-off-by: Asias He <asias@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> --- drivers/vhost/tcm_vhost.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c index 957a0b9..88ebb79 100644 --- a/drivers/vhost/tcm_vhost.c +++ b/drivers/vhost/tcm_vhost.c @@ -99,6 +99,18 @@ static int iov_num_pages(struct iovec *iov) ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT; } +static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) +{ + bool ret = false; + + mutex_lock(&vs->dev.mutex); + if (vhost_has_feature(&vs->dev, feature)) + ret = true; + mutex_unlock(&vs->dev.mutex); + + return ret; +} + static int tcm_vhost_check_true(struct se_portal_group *se_tpg) { return 1; -- 1.8.1.4 ^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v6 2/3] tcm_vhost: Add helper to check if endpoint is setup [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org> 2013-04-13 3:20 ` [PATCH v6 " Asias He 2013-04-13 3:20 ` [PATCH v6 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He @ 2013-04-13 3:20 ` Asias He 2013-04-13 3:20 ` [PATCH v6 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He ` (2 subsequent siblings) 5 siblings, 0 replies; 37+ messages in thread From: Asias He @ 2013-04-13 3:20 UTC (permalink / raw) To: Nicholas Bellinger Cc: kvm, Michael S. Tsirkin, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini Signed-off-by: Asias He <asias@redhat.com> --- drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c index 88ebb79..8f05528 100644 --- a/drivers/vhost/tcm_vhost.c +++ b/drivers/vhost/tcm_vhost.c @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) return ret; } +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) +{ + bool ret = false; + + /* + * We can handle the vq only after the endpoint is setup by calling the + * VHOST_SCSI_SET_ENDPOINT ioctl. + * + * TODO: Check that we are running from vhost_worker which acts + * as read-side critical section for vhost kind of RCU. + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h + */ + if (rcu_dereference_check(vq->private_data, 1)) + ret = true; + + return ret; +} + static int tcm_vhost_check_true(struct se_portal_group *se_tpg) { return 1; -- 1.8.1.4 ^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v6 3/3] tcm_vhost: Add hotplug/hotunplug support [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org> ` (2 preceding siblings ...) 2013-04-13 3:20 ` [PATCH v6 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He @ 2013-04-13 3:20 ` Asias He 2013-04-14 8:53 ` [PATCH v5 0/3] tcm_vhost hotplug Michael S. Tsirkin 2013-04-17 13:17 ` Michael S. Tsirkin 5 siblings, 0 replies; 37+ messages in thread From: Asias He @ 2013-04-13 3:20 UTC (permalink / raw) To: Nicholas Bellinger Cc: kvm, Michael S. Tsirkin, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for virtio-scsi), hotplug support is added to virtio-scsi. This patch adds hotplug and hotunplug support to tcm_vhost. You can create or delete a LUN in targetcli to hotplug or hotunplug a LUN in guest. Changes in v6: - Pass tcm_vhost_evt to tcm_vhost_do_evt_work Changes in v5: - Switch to int from u64 to vs_events_nr - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt - Do not nest dev mutex within vq mutex - Use vs_events_lock to protect vs_events_dropped and vs_events_nr - Rebase to target/master Changes in v4: - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick Changes in v3: - Separate the bug fix to another thread Changes in v2: - Remove code duplication in tcm_vhost_{hotplug,hotunplug} - Fix racing of vs_events_nr - Add flush fix patch to this series Signed-off-by: Asias He <asias@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> --- drivers/vhost/tcm_vhost.c | 207 +++++++++++++++++++++++++++++++++++++++++++++- drivers/vhost/tcm_vhost.h | 10 +++ 2 files changed, 213 insertions(+), 4 deletions(-) diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c index 8f05528..aa457d2 100644 --- a/drivers/vhost/tcm_vhost.c +++ b/drivers/vhost/tcm_vhost.c @@ -66,11 +66,13 @@ enum { * TODO: debug and remove the workaround. 
*/ enum { - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | + (1ULL << VIRTIO_SCSI_F_HOTPLUG) }; #define VHOST_SCSI_MAX_TARGET 256 #define VHOST_SCSI_MAX_VQ 128 +#define VHOST_SCSI_MAX_EVENT 128 struct vhost_scsi { /* Protected by vhost_scsi->dev.mutex */ @@ -82,6 +84,13 @@ struct vhost_scsi { struct vhost_work vs_completion_work; /* cmd completion work item */ struct llist_head vs_completion_list; /* cmd completion queue */ + + struct vhost_work vs_event_work; /* evt injection work item */ + struct llist_head vs_event_list; /* evt injection queue */ + + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ + bool vs_events_dropped; /* any missed events */ + int vs_events_nr; /* num of pending events */ }; /* Local pointer to allocated TCM configfs fabric module */ @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) return ret; } +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) +{ + bool ret; + + mutex_lock(&vs->vs_events_lock); + ret = vs->vs_events_dropped; + mutex_unlock(&vs->vs_events_lock); + + return ret; +} + static int tcm_vhost_check_true(struct se_portal_group *se_tpg) { return 1; @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) return 0; } +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) +{ + mutex_lock(&vs->vs_events_lock); + vs->vs_events_nr--; + kfree(evt); + mutex_unlock(&vs->vs_events_lock); +} + +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, + u32 event, u32 reason) +{ + struct tcm_vhost_evt *evt; + + mutex_lock(&vs->vs_events_lock); + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { + vs->vs_events_dropped = true; + mutex_unlock(&vs->vs_events_lock); + return NULL; + } + + evt = kzalloc(sizeof(*evt), GFP_KERNEL); + if (evt) { + evt->event.event = event; + evt->event.reason = reason; + vs->vs_events_nr++; + } + mutex_unlock(&vs->vs_events_lock); + + return evt; +} + static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) { struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) kfree(tv_cmd); } +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, + struct tcm_vhost_evt *evt) +{ + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; + struct virtio_scsi_event *event = &evt->event; + struct virtio_scsi_event __user *eventp; + unsigned out, in; + int head, ret; + + if (!tcm_vhost_check_endpoint(vq)) + return; + + mutex_lock(&vs->vs_events_lock); + mutex_lock(&vq->mutex); +again: + vhost_disable_notify(&vs->dev, vq); + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, + ARRAY_SIZE(vq->iov), &out, &in, + NULL, NULL); + if (head < 0) { + vs->vs_events_dropped = true; + goto out; + } + if (head == vq->num) { + if (vhost_enable_notify(&vs->dev, vq)) + goto again; + vs->vs_events_dropped = true; + goto out; + } + + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", + vq->iov[out].iov_len); + vs->vs_events_dropped = true; + goto out; + } + + if (vs->vs_events_dropped) { + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; + vs->vs_events_dropped = false; + } + + eventp = vq->iov[out].iov_base; + ret = __copy_to_user(eventp, event, sizeof(*event)); + if (!ret) + vhost_add_used_and_signal(&vs->dev, vq, head, 0); + else + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); +out: + 
mutex_unlock(&vq->mutex); + mutex_unlock(&vs->vs_events_lock); +} + +static void tcm_vhost_evt_work(struct vhost_work *work) +{ + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, + vs_event_work); + struct tcm_vhost_evt *evt; + struct llist_node *llnode; + + llnode = llist_del_all(&vs->vs_event_list); + while (llnode) { + evt = llist_entry(llnode, struct tcm_vhost_evt, list); + llnode = llist_next(llnode); + tcm_vhost_do_evt_work(vs, evt); + tcm_vhost_free_evt(vs, evt); + } +} + /* Fill in status and signal that we are done processing this command * * This is scheduled in the vhost work queue so we are called with the owner @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) pr_debug("%s: The handling func for control queue.\n", __func__); } +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, + struct se_lun *lun, u32 event, u32 reason) +{ + struct tcm_vhost_evt *evt; + + evt = tcm_vhost_allocate_evt(vs, event, reason); + if (!evt) + return -ENOMEM; + + if (tpg && lun) { + /* TODO: share lun setup code with virtio-scsi.ko */ + evt->event.lun[0] = 0x01; + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; + if (lun->unpacked_lun >= 256) + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; + evt->event.lun[3] = lun->unpacked_lun & 0xFF; + } + + llist_add(&evt->list, &vs->vs_event_list); + vhost_work_queue(&vs->dev, &vs->vs_event_work); + + return 0; +} + static void vhost_scsi_evt_handle_kick(struct vhost_work *work) { - pr_debug("%s: The handling func for event queue.\n", __func__); + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, + poll.work); + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); + + if (!tcm_vhost_check_endpoint(vq)) + return; + + if (tcm_vhost_check_events_dropped(vs)) + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); + } static void vhost_scsi_handle_kick(struct vhost_work *work) @@ -891,6 +1044,7 @@ static int vhost_scsi_set_endpoint( return -EEXIST; } tv_tpg->tv_tpg_vhost_count++; + tv_tpg->vhost_scsi = vs; vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; smp_mb__after_atomic_inc(); match = true; @@ -974,6 +1128,7 @@ static int vhost_scsi_clear_endpoint( goto err_tpg; } tv_tpg->tv_tpg_vhost_count--; + tv_tpg->vhost_scsi = NULL; vs->vs_tpg[target] = NULL; match = true; mutex_unlock(&tv_tpg->tv_tpg_mutex); @@ -1033,6 +1188,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) return -ENOMEM; vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); + + s->vs_events_nr = 0; + s->vs_events_dropped = false; + mutex_init(&s->vs_events_lock); s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; @@ -1163,6 +1323,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) return "Unknown"; } +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, + struct se_lun *lun, bool plug) +{ + + struct vhost_scsi *vs = tpg->vhost_scsi; + u32 reason; + + mutex_lock(&tpg->tv_tpg_mutex); + vs = tpg->vhost_scsi; + mutex_unlock(&tpg->tv_tpg_mutex); + if (!vs) + return -EOPNOTSUPP; + + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) + return -EOPNOTSUPP; + + if (plug) + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; + else + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; + + return tcm_vhost_send_evt(vs, tpg, lun, + VIRTIO_SCSI_T_TRANSPORT_RESET, + reason); +} + +static int tcm_vhost_hotplug(struct 
tcm_vhost_tpg *tpg, struct se_lun *lun) +{ + return tcm_vhost_do_plug(tpg, lun, true); +} + +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) +{ + return tcm_vhost_do_plug(tpg, lun, false); +} + static int tcm_vhost_port_link(struct se_portal_group *se_tpg, struct se_lun *lun) { @@ -1173,18 +1369,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, tv_tpg->tv_tpg_port_count++; mutex_unlock(&tv_tpg->tv_tpg_mutex); + tcm_vhost_hotplug(tv_tpg, lun); + return 0; } static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, - struct se_lun *se_lun) + struct se_lun *lun) { struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, struct tcm_vhost_tpg, se_tpg); - mutex_lock(&tv_tpg->tv_tpg_mutex); tv_tpg->tv_tpg_port_count--; mutex_unlock(&tv_tpg->tv_tpg_mutex); + + tcm_vhost_hotunplug(tv_tpg, lun); } static struct se_node_acl *tcm_vhost_make_nodeacl( diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h index 1d2ae7a..94e9ee53 100644 --- a/drivers/vhost/tcm_vhost.h +++ b/drivers/vhost/tcm_vhost.h @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { struct se_node_acl se_node_acl; }; +struct vhost_scsi; struct tcm_vhost_tpg { /* Vhost port target portal group tag for TCM */ u16 tport_tpgt; @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { struct tcm_vhost_tport *tport; /* Returned by tcm_vhost_make_tpg() */ struct se_portal_group se_tpg; + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ + struct vhost_scsi *vhost_scsi; }; struct tcm_vhost_tport { @@ -83,6 +86,13 @@ struct tcm_vhost_tport { struct se_wwn tport_wwn; }; +struct tcm_vhost_evt { + /* virtio_scsi event */ + struct virtio_scsi_event event; + /* virtio_scsi event list, serviced from vhost worker thread */ + struct llist_node list; +}; + /* * As per request from MST, keep TCM_VHOST related ioctl defines out of * linux/vhost.h (user-space) for now.. -- 1.8.1.4 ^ permalink raw reply related [flat|nested] 37+ messages in thread
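
(Aside: the event bookkeeping spread across the hunks above can be read as a
producer/consumer pair. The sketch below is a paraphrase for orientation
only; example_queue_event() is a hypothetical name, while the functions it
calls are the ones added by this patch.)

/* Producer side: the hotplug/hotunplug paths and the evt kick handler. */
static void example_queue_event(struct vhost_scsi *vs, u32 event, u32 reason)
{
	struct tcm_vhost_evt *evt = tcm_vhost_allocate_evt(vs, event, reason);

	if (!evt)	/* more than VHOST_SCSI_MAX_EVENT already pending:   */
		return;	/* vs_events_dropped is now set, nothing is queued. */

	llist_add(&evt->list, &vs->vs_event_list);	/* lock-free hand-off */
	vhost_work_queue(&vs->dev, &vs->vs_event_work);	/* wake vhost worker  */
}

/*
 * Consumer side: tcm_vhost_evt_work() drains vs_event_list on the vhost
 * worker and calls tcm_vhost_do_evt_work() for each entry.  If no guest
 * descriptor is available the event is dropped and vs_events_dropped is
 * set; the next event that is actually delivered carries
 * VIRTIO_SCSI_T_EVENTS_MISSED so the guest knows to rescan, and
 * vhost_scsi_evt_handle_kick() injects a VIRTIO_SCSI_T_NO_EVENT with that
 * flag once the guest refills the event queue.
 */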
* Re: [PATCH v5 0/3] tcm_vhost hotplug
  [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org>
              ` (3 preceding siblings ...)
  2013-04-13 3:20 ` [PATCH v6 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He
@ 2013-04-14 8:53 ` Michael S. Tsirkin
  2013-04-17 13:17 ` Michael S. Tsirkin
  5 siblings, 0 replies; 37+ messages in thread
From: Michael S. Tsirkin @ 2013-04-14 8:53 UTC (permalink / raw)
To: Nicholas A. Bellinger; +Cc: kvm, virtualization, Stefan Hajnoczi, Paolo Bonzini

On Fri, Apr 12, 2013 at 04:10:50PM -0700, Nicholas A. Bellinger wrote:
> Hi MST,
>
> Ping on this series for use with vhost-scsi-pci code..?
>
> They will need to be heading into target-pending/for-next soon to make
> the v3.10 window opening in t-minus 9 days. ;)
>
> --nab

Let me turn this around and ask you: what's the target for QEMU userspace?

> On Tue, 2013-04-09 at 16:51 +0800, Asias He wrote:
> > Asias He (3):
> >   tcm_vhost: Introduce tcm_vhost_check_feature()
> >   tcm_vhost: Add helper to check if endpoint is setup
> >   tcm_vhost: Add hotplug/hotunplug support
> >
> >  drivers/vhost/tcm_vhost.c | 236 +++++++++++++++++++++++++++++++++++++++++++++-
> >  drivers/vhost/tcm_vhost.h | 10 ++
> >  2 files changed, 242 insertions(+), 4 deletions(-)
> >

^ permalink raw reply	[flat|nested] 37+ messages in thread
* Re: [PATCH v5 0/3] tcm_vhost hotplug
  [not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org>
              ` (4 preceding siblings ...)
  2013-04-14 8:53 ` [PATCH v5 0/3] tcm_vhost hotplug Michael S. Tsirkin
@ 2013-04-17 13:17 ` Michael S. Tsirkin
  2013-04-18 1:05 ` [PATCH v7 " Asias He
                 ` (7 more replies)
  5 siblings, 8 replies; 37+ messages in thread
From: Michael S. Tsirkin @ 2013-04-17 13:17 UTC (permalink / raw)
To: Nicholas A. Bellinger; +Cc: kvm, virtualization, Stefan Hajnoczi, Paolo Bonzini

On Fri, Apr 12, 2013 at 04:10:50PM -0700, Nicholas A. Bellinger wrote:
> Hi MST,
>
> Ping on this series for use with vhost-scsi-pci code..?
>
> They will need to be heading into target-pending/for-next soon to make
> the v3.10 window opening in t-minus 9 days. ;)
>
> --nab
>
> On Tue, 2013-04-09 at 16:51 +0800, Asias He wrote:
> > Asias He (3):
> >   tcm_vhost: Introduce tcm_vhost_check_feature()
> >   tcm_vhost: Add helper to check if endpoint is setup
> >   tcm_vhost: Add hotplug/hotunplug support
> >
> >  drivers/vhost/tcm_vhost.c | 236 +++++++++++++++++++++++++++++++++++++++++++++-
> >  drivers/vhost/tcm_vhost.h | 10 ++
> >  2 files changed, 242 insertions(+), 4 deletions(-)
> >

Okay, Paolo says it's now a sure thing that we'll use it in QEMU, in fact
in 1.5, so let's cross our fingers and start merging new code.

Acked-by: Michael S. Tsirkin <mst@redhat.com>

^ permalink raw reply	[flat|nested] 37+ messages in thread
* [PATCH v7 0/3] tcm_vhost hotplug
  2013-04-17 13:17 ` Michael S. Tsirkin
@ 2013-04-18 1:05 ` Asias He
  2013-04-18 1:05 ` [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
                 ` (6 subsequent siblings)
  7 siblings, 0 replies; 37+ messages in thread
From: Asias He @ 2013-04-18 1:05 UTC (permalink / raw)
To: Nicholas Bellinger
Cc: kvm, Michael S. Tsirkin, virtualization, target-devel,
	Stefan Hajnoczi, Paolo Bonzini

Changes in v7:
- Add vhost_work_flush for vs->vs_event_work to this series

Asias He (3):
  tcm_vhost: Introduce tcm_vhost_check_feature()
  tcm_vhost: Add helper to check if endpoint is setup
  tcm_vhost: Add hotplug/hotunplug support

 drivers/vhost/tcm_vhost.c | 238 +++++++++++++++++++++++++++++++++++++++++++++-
 drivers/vhost/tcm_vhost.h | 10 ++
 2 files changed, 244 insertions(+), 4 deletions(-)

-- 
1.8.1.4

^ permalink raw reply	[flat|nested] 37+ messages in thread
* [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() 2013-04-17 13:17 ` Michael S. Tsirkin 2013-04-18 1:05 ` [PATCH v7 " Asias He @ 2013-04-18 1:05 ` Asias He 2013-04-18 7:06 ` Michael S. Tsirkin [not found] ` <20130418070606.GC13787@redhat.com> 2013-04-18 1:05 ` [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He ` (5 subsequent siblings) 7 siblings, 2 replies; 37+ messages in thread From: Asias He @ 2013-04-18 1:05 UTC (permalink / raw) To: Nicholas Bellinger Cc: kvm, Michael S. Tsirkin, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini This helper is useful to check if a feature is supported. Signed-off-by: Asias He <asias@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> --- drivers/vhost/tcm_vhost.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c index 957a0b9..88ebb79 100644 --- a/drivers/vhost/tcm_vhost.c +++ b/drivers/vhost/tcm_vhost.c @@ -99,6 +99,18 @@ static int iov_num_pages(struct iovec *iov) ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT; } +static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) +{ + bool ret = false; + + mutex_lock(&vs->dev.mutex); + if (vhost_has_feature(&vs->dev, feature)) + ret = true; + mutex_unlock(&vs->dev.mutex); + + return ret; +} + static int tcm_vhost_check_true(struct se_portal_group *se_tpg) { return 1; -- 1.8.1.4 ^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature()
  2013-04-18 1:05 ` [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
@ 2013-04-18 7:06 ` Michael S. Tsirkin
  [not found] ` <20130418070606.GC13787@redhat.com>
  1 sibling, 0 replies; 37+ messages in thread
From: Michael S. Tsirkin @ 2013-04-18 7:06 UTC (permalink / raw)
To: Asias He
Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Thu, Apr 18, 2013 at 09:05:52AM +0800, Asias He wrote:
> This helper is useful to check if a feature is supported.
>
> Signed-off-by: Asias He <asias@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

Why do we need the locking here by the way?
It seems features don't change while vhost is running.

> ---
>  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index 957a0b9..88ebb79 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -99,6 +99,18 @@ static int iov_num_pages(struct iovec *iov)
>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
>  }
>  
> +static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
> +{
> +	bool ret = false;
> +
> +	mutex_lock(&vs->dev.mutex);
> +	if (vhost_has_feature(&vs->dev, feature))
> +		ret = true;
> +	mutex_unlock(&vs->dev.mutex);
> +
> +	return ret;
> +}
> +
> static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
> {
> 	return 1;
> -- 
> 1.8.1.4

^ permalink raw reply	[flat|nested] 37+ messages in thread
[parent not found: <20130418070606.GC13787@redhat.com>]
* Re: [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature()
  [not found] ` <20130418070606.GC13787@redhat.com>
@ 2013-04-18 8:25 ` Asias He
  2013-04-18 7:44 ` Michael S. Tsirkin
  0 siblings, 1 reply; 37+ messages in thread
From: Asias He @ 2013-04-18 8:25 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Thu, Apr 18, 2013 at 10:06:06AM +0300, Michael S. Tsirkin wrote:
> On Thu, Apr 18, 2013 at 09:05:52AM +0800, Asias He wrote:
> > This helper is useful to check if a feature is supported.
> >
> > Signed-off-by: Asias He <asias@redhat.com>
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> Why do we need the locking here by the way?
> It seems features don't change while vhost
> is running.

See:

static inline int vhost_has_feature(struct vhost_dev *dev, int bit)
{
	unsigned acked_features;

	/* TODO: check that we are running from vhost_worker or dev mutex is
	 * held? */
	acked_features = rcu_dereference_index_check(dev->acked_features, 1);
	return acked_features & (1 << bit);
}

We are accessing it outside the vhost_worker thread.

> > ---
> >  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > index 957a0b9..88ebb79 100644
> > --- a/drivers/vhost/tcm_vhost.c
> > +++ b/drivers/vhost/tcm_vhost.c
> > @@ -99,6 +99,18 @@ static int iov_num_pages(struct iovec *iov)
> >  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
> >  }
> >  
> > +static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
> > +{
> > +	bool ret = false;
> > +
> > +	mutex_lock(&vs->dev.mutex);
> > +	if (vhost_has_feature(&vs->dev, feature))
> > +		ret = true;
> > +	mutex_unlock(&vs->dev.mutex);
> > +
> > +	return ret;
> > +}
> > +
> > static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
> > {
> > 	return 1;
> > -- 
> > 1.8.1.4

-- 
Asias

^ permalink raw reply	[flat|nested] 37+ messages in thread
* Re: [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature()
  2013-04-18 8:25 ` Asias He
@ 2013-04-18 7:44 ` Michael S. Tsirkin
  0 siblings, 0 replies; 37+ messages in thread
From: Michael S. Tsirkin @ 2013-04-18 7:44 UTC (permalink / raw)
To: Asias He
Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Thu, Apr 18, 2013 at 04:25:21PM +0800, Asias He wrote:
> On Thu, Apr 18, 2013 at 10:06:06AM +0300, Michael S. Tsirkin wrote:
> > On Thu, Apr 18, 2013 at 09:05:52AM +0800, Asias He wrote:
> > > This helper is useful to check if a feature is supported.
> > >
> > > Signed-off-by: Asias He <asias@redhat.com>
> > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> >
> > Why do we need the locking here by the way?
> > It seems features don't change while vhost
> > is running.
>
> See:
>
> static inline int vhost_has_feature(struct vhost_dev *dev, int bit)
> {
> 	unsigned acked_features;
>
> 	/* TODO: check that we are running from vhost_worker or dev mutex is
> 	 * held? */
> 	acked_features = rcu_dereference_index_check(dev->acked_features, 1);
> 	return acked_features & (1 << bit);
> }
>
> We are accessing it outside the vhost_worker thread.

Hmm, I see: we allow changing features on the fly, and the reason is the
LOG_ALL feature. Okay ... do we ever call this from the worker thread?
Because if we do, this will deadlock. Maybe add a comment about this.

> > > ---
> > >  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
> > >  1 file changed, 12 insertions(+)
> > >
> > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > > index 957a0b9..88ebb79 100644
> > > --- a/drivers/vhost/tcm_vhost.c
> > > +++ b/drivers/vhost/tcm_vhost.c
> > > @@ -99,6 +99,18 @@ static int iov_num_pages(struct iovec *iov)
> > >  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
> > >  }
> > >  
> > > +static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
> > > +{
> > > +	bool ret = false;
> > > +
> > > +	mutex_lock(&vs->dev.mutex);
> > > +	if (vhost_has_feature(&vs->dev, feature))
> > > +		ret = true;
> > > +	mutex_unlock(&vs->dev.mutex);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
> > > {
> > > 	return 1;
> > > -- 
> > > 1.8.1.4
>
> -- 
> Asias

^ permalink raw reply	[flat|nested] 37+ messages in thread
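
(To make the deadlock concern above concrete: vhost ioctls take dev.mutex and
then flush the vhost worker, so a worker-side path that blocked on dev.mutex
could wait forever. That reading, and the helper sketched below, are editorial
guesses rather than anything stated in the thread; the helper name is
hypothetical.)

/* Hypothetical sketch, not from the thread: feature test usable from the
 * vhost worker, where taking dev.mutex would risk the deadlock described
 * above.  The worker itself acts as the read side of vhost's RCU-like
 * scheme, so no extra locking is taken here. */
static bool example_has_feature_on_worker(struct vhost_scsi *vs, int feature)
{
	/* No dev.mutex: only valid when running on the vhost worker. */
	return vhost_has_feature(&vs->dev, feature);
}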
* [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup 2013-04-17 13:17 ` Michael S. Tsirkin 2013-04-18 1:05 ` [PATCH v7 " Asias He 2013-04-18 1:05 ` [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He @ 2013-04-18 1:05 ` Asias He 2013-04-18 1:05 ` [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He ` (4 subsequent siblings) 7 siblings, 0 replies; 37+ messages in thread From: Asias He @ 2013-04-18 1:05 UTC (permalink / raw) To: Nicholas Bellinger Cc: kvm, Michael S. Tsirkin, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini Signed-off-by: Asias He <asias@redhat.com> --- drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c index 88ebb79..8f05528 100644 --- a/drivers/vhost/tcm_vhost.c +++ b/drivers/vhost/tcm_vhost.c @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) return ret; } +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) +{ + bool ret = false; + + /* + * We can handle the vq only after the endpoint is setup by calling the + * VHOST_SCSI_SET_ENDPOINT ioctl. + * + * TODO: Check that we are running from vhost_worker which acts + * as read-side critical section for vhost kind of RCU. + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h + */ + if (rcu_dereference_check(vq->private_data, 1)) + ret = true; + + return ret; +} + static int tcm_vhost_check_true(struct se_portal_group *se_tpg) { return 1; -- 1.8.1.4 ^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-17 13:17 ` Michael S. Tsirkin ` (2 preceding siblings ...) 2013-04-18 1:05 ` [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He @ 2013-04-18 1:05 ` Asias He 2013-04-18 1:09 ` [PATCH v5 0/3] tcm_vhost hotplug Asias He ` (3 subsequent siblings) 7 siblings, 0 replies; 37+ messages in thread From: Asias He @ 2013-04-18 1:05 UTC (permalink / raw) To: Nicholas Bellinger Cc: kvm, Michael S. Tsirkin, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for virtio-scsi), hotplug support is added to virtio-scsi. This patch adds hotplug and hotunplug support to tcm_vhost. You can create or delete a LUN in targetcli to hotplug or hotunplug a LUN in guest. Changes in v7: - Add vhost_work_flush for vs->vs_event_work to this series Changes in v6: - Pass tcm_vhost_evt to tcm_vhost_do_evt_work Changes in v5: - Switch to int from u64 to vs_events_nr - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt - Do not nest dev mutex within vq mutex - Use vs_events_lock to protect vs_events_dropped and vs_events_nr - Rebase to target/master Changes in v4: - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick Changes in v3: - Separate the bug fix to another thread Changes in v2: - Remove code duplication in tcm_vhost_{hotplug,hotunplug} - Fix racing of vs_events_nr - Add flush fix patch to this series Signed-off-by: Asias He <asias@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> --- drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- drivers/vhost/tcm_vhost.h | 10 +++ 2 files changed, 214 insertions(+), 4 deletions(-) diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c index 8f05528..da2021b 100644 --- a/drivers/vhost/tcm_vhost.c +++ b/drivers/vhost/tcm_vhost.c @@ -66,11 +66,13 @@ enum { * TODO: debug and remove the workaround. 
*/ enum { - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | + (1ULL << VIRTIO_SCSI_F_HOTPLUG) }; #define VHOST_SCSI_MAX_TARGET 256 #define VHOST_SCSI_MAX_VQ 128 +#define VHOST_SCSI_MAX_EVENT 128 struct vhost_scsi { /* Protected by vhost_scsi->dev.mutex */ @@ -82,6 +84,13 @@ struct vhost_scsi { struct vhost_work vs_completion_work; /* cmd completion work item */ struct llist_head vs_completion_list; /* cmd completion queue */ + + struct vhost_work vs_event_work; /* evt injection work item */ + struct llist_head vs_event_list; /* evt injection queue */ + + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ + bool vs_events_dropped; /* any missed events */ + int vs_events_nr; /* num of pending events */ }; /* Local pointer to allocated TCM configfs fabric module */ @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) return ret; } +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) +{ + bool ret; + + mutex_lock(&vs->vs_events_lock); + ret = vs->vs_events_dropped; + mutex_unlock(&vs->vs_events_lock); + + return ret; +} + static int tcm_vhost_check_true(struct se_portal_group *se_tpg) { return 1; @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) return 0; } +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) +{ + mutex_lock(&vs->vs_events_lock); + vs->vs_events_nr--; + kfree(evt); + mutex_unlock(&vs->vs_events_lock); +} + +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, + u32 event, u32 reason) +{ + struct tcm_vhost_evt *evt; + + mutex_lock(&vs->vs_events_lock); + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { + vs->vs_events_dropped = true; + mutex_unlock(&vs->vs_events_lock); + return NULL; + } + + evt = kzalloc(sizeof(*evt), GFP_KERNEL); + if (evt) { + evt->event.event = event; + evt->event.reason = reason; + vs->vs_events_nr++; + } + mutex_unlock(&vs->vs_events_lock); + + return evt; +} + static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) { struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) kfree(tv_cmd); } +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, + struct tcm_vhost_evt *evt) +{ + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; + struct virtio_scsi_event *event = &evt->event; + struct virtio_scsi_event __user *eventp; + unsigned out, in; + int head, ret; + + if (!tcm_vhost_check_endpoint(vq)) + return; + + mutex_lock(&vs->vs_events_lock); + mutex_lock(&vq->mutex); +again: + vhost_disable_notify(&vs->dev, vq); + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, + ARRAY_SIZE(vq->iov), &out, &in, + NULL, NULL); + if (head < 0) { + vs->vs_events_dropped = true; + goto out; + } + if (head == vq->num) { + if (vhost_enable_notify(&vs->dev, vq)) + goto again; + vs->vs_events_dropped = true; + goto out; + } + + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", + vq->iov[out].iov_len); + vs->vs_events_dropped = true; + goto out; + } + + if (vs->vs_events_dropped) { + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; + vs->vs_events_dropped = false; + } + + eventp = vq->iov[out].iov_base; + ret = __copy_to_user(eventp, event, sizeof(*event)); + if (!ret) + vhost_add_used_and_signal(&vs->dev, vq, head, 0); + else + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); +out: + 
mutex_unlock(&vq->mutex); + mutex_unlock(&vs->vs_events_lock); +} + +static void tcm_vhost_evt_work(struct vhost_work *work) +{ + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, + vs_event_work); + struct tcm_vhost_evt *evt; + struct llist_node *llnode; + + llnode = llist_del_all(&vs->vs_event_list); + while (llnode) { + evt = llist_entry(llnode, struct tcm_vhost_evt, list); + llnode = llist_next(llnode); + tcm_vhost_do_evt_work(vs, evt); + tcm_vhost_free_evt(vs, evt); + } +} + /* Fill in status and signal that we are done processing this command * * This is scheduled in the vhost work queue so we are called with the owner @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) pr_debug("%s: The handling func for control queue.\n", __func__); } +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, + struct se_lun *lun, u32 event, u32 reason) +{ + struct tcm_vhost_evt *evt; + + evt = tcm_vhost_allocate_evt(vs, event, reason); + if (!evt) + return -ENOMEM; + + if (tpg && lun) { + /* TODO: share lun setup code with virtio-scsi.ko */ + evt->event.lun[0] = 0x01; + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; + if (lun->unpacked_lun >= 256) + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; + evt->event.lun[3] = lun->unpacked_lun & 0xFF; + } + + llist_add(&evt->list, &vs->vs_event_list); + vhost_work_queue(&vs->dev, &vs->vs_event_work); + + return 0; +} + static void vhost_scsi_evt_handle_kick(struct vhost_work *work) { - pr_debug("%s: The handling func for event queue.\n", __func__); + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, + poll.work); + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); + + if (!tcm_vhost_check_endpoint(vq)) + return; + + if (tcm_vhost_check_events_dropped(vs)) + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); + } static void vhost_scsi_handle_kick(struct vhost_work *work) @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) vhost_scsi_flush_vq(vs, i); vhost_work_flush(&vs->dev, &vs->vs_completion_work); + vhost_work_flush(&vs->dev, &vs->vs_event_work); } /* @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( return -EEXIST; } tv_tpg->tv_tpg_vhost_count++; + tv_tpg->vhost_scsi = vs; vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; smp_mb__after_atomic_inc(); match = true; @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( goto err_tpg; } tv_tpg->tv_tpg_vhost_count--; + tv_tpg->vhost_scsi = NULL; vs->vs_tpg[target] = NULL; match = true; mutex_unlock(&tv_tpg->tv_tpg_mutex); @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) return -ENOMEM; vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); + + s->vs_events_nr = 0; + s->vs_events_dropped = false; + mutex_init(&s->vs_events_lock); s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) return "Unknown"; } +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, + struct se_lun *lun, bool plug) +{ + + struct vhost_scsi *vs = tpg->vhost_scsi; + u32 reason; + + mutex_lock(&tpg->tv_tpg_mutex); + vs = tpg->vhost_scsi; + mutex_unlock(&tpg->tv_tpg_mutex); + if (!vs) + return -EOPNOTSUPP; + + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) + return 
-EOPNOTSUPP; + + if (plug) + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; + else + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; + + return tcm_vhost_send_evt(vs, tpg, lun, + VIRTIO_SCSI_T_TRANSPORT_RESET, + reason); +} + +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) +{ + return tcm_vhost_do_plug(tpg, lun, true); +} + +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) +{ + return tcm_vhost_do_plug(tpg, lun, false); +} + static int tcm_vhost_port_link(struct se_portal_group *se_tpg, struct se_lun *lun) { @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, tv_tpg->tv_tpg_port_count++; mutex_unlock(&tv_tpg->tv_tpg_mutex); + tcm_vhost_hotplug(tv_tpg, lun); + return 0; } static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, - struct se_lun *se_lun) + struct se_lun *lun) { struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, struct tcm_vhost_tpg, se_tpg); - mutex_lock(&tv_tpg->tv_tpg_mutex); tv_tpg->tv_tpg_port_count--; mutex_unlock(&tv_tpg->tv_tpg_mutex); + + tcm_vhost_hotunplug(tv_tpg, lun); } static struct se_node_acl *tcm_vhost_make_nodeacl( diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h index 1d2ae7a..94e9ee53 100644 --- a/drivers/vhost/tcm_vhost.h +++ b/drivers/vhost/tcm_vhost.h @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { struct se_node_acl se_node_acl; }; +struct vhost_scsi; struct tcm_vhost_tpg { /* Vhost port target portal group tag for TCM */ u16 tport_tpgt; @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { struct tcm_vhost_tport *tport; /* Returned by tcm_vhost_make_tpg() */ struct se_portal_group se_tpg; + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ + struct vhost_scsi *vhost_scsi; }; struct tcm_vhost_tport { @@ -83,6 +86,13 @@ struct tcm_vhost_tport { struct se_wwn tport_wwn; }; +struct tcm_vhost_evt { + /* virtio_scsi event */ + struct virtio_scsi_event event; + /* virtio_scsi event list, serviced from vhost worker thread */ + struct llist_node list; +}; + /* * As per request from MST, keep TCM_VHOST related ioctl defines out of * linux/vhost.h (user-space) for now.. -- 1.8.1.4 ^ permalink raw reply related [flat|nested] 37+ messages in thread
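
(The only functional change relative to v6 is the extra vhost_work_flush()
call in vhost_scsi_flush(), visible in the hunk above. As a reading aid, the
resulting function is restated below; the comments are an editorial gloss on
why the ordering matters, not text from the patch, and the "int i;" line is
inferred from context rather than shown in the diff.)

static void vhost_scsi_flush(struct vhost_scsi *vs)
{
	int i;

	/* Wait for in-flight requests on every virtqueue to drain ...     */
	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
		vhost_scsi_flush_vq(vs, i);
	/* ... then for any completions those requests queued ...          */
	vhost_work_flush(&vs->dev, &vs->vs_completion_work);
	/* ... and finally (new in v7) for hotplug events still queued, so */
	/* clear-endpoint and release do not race with event injection.    */
	vhost_work_flush(&vs->dev, &vs->vs_event_work);
}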
* Re: [PATCH v5 0/3] tcm_vhost hotplug
  2013-04-17 13:17 ` Michael S. Tsirkin
                   ` (3 preceding siblings ...)
  2013-04-18 1:05 ` [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He
@ 2013-04-18 1:09 ` Asias He
  [not found] ` <1366247154-3136-3-git-send-email-asias@redhat.com>
              ` (2 subsequent siblings)
  7 siblings, 0 replies; 37+ messages in thread
From: Asias He @ 2013-04-18 1:09 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, virtualization, Stefan Hajnoczi, Paolo Bonzini

On Wed, Apr 17, 2013 at 04:17:26PM +0300, Michael S. Tsirkin wrote:
> On Fri, Apr 12, 2013 at 04:10:50PM -0700, Nicholas A. Bellinger wrote:
> > Hi MST,
> >
> > Ping on this series for use with vhost-scsi-pci code..?
> >
> > They will need to be heading into target-pending/for-next soon to make
> > the v3.10 window opening in t-minus 9 days. ;)
> >
> > --nab
> >
> > On Tue, 2013-04-09 at 16:51 +0800, Asias He wrote:
> > > Asias He (3):
> > >   tcm_vhost: Introduce tcm_vhost_check_feature()
> > >   tcm_vhost: Add helper to check if endpoint is setup
> > >   tcm_vhost: Add hotplug/hotunplug support
> > >
> > >  drivers/vhost/tcm_vhost.c | 236 +++++++++++++++++++++++++++++++++++++++++++++-
> > >  drivers/vhost/tcm_vhost.h | 10 ++
> > >  2 files changed, 242 insertions(+), 4 deletions(-)
> > >
>
> Okay, Paolo says it's now a sure thing we'll use it in qemu, in fact in
> 1.5, so let's cross the fingers and start merging new code.
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>

Nicholas, please take v7. It contains one hunk that belongs to this series.

-- 
Asias

^ permalink raw reply	[flat|nested] 37+ messages in thread
[parent not found: <1366247154-3136-3-git-send-email-asias@redhat.com>]
* Re: [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup
  [not found] ` <1366247154-3136-3-git-send-email-asias@redhat.com>
@ 2013-04-18 7:09 ` Michael S. Tsirkin
  2013-04-18 8:32 ` Asias He
  0 siblings, 1 reply; 37+ messages in thread
From: Michael S. Tsirkin @ 2013-04-18 7:09 UTC (permalink / raw)
To: Asias He
Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Thu, Apr 18, 2013 at 09:05:53AM +0800, Asias He wrote:
> Signed-off-by: Asias He <asias@redhat.com>
> ---
>  drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index 88ebb79..8f05528 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature)
>  	return ret;
>  }
>  
> +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq)
> +{
> +	bool ret = false;
> +
> +	/*
> +	 * We can handle the vq only after the endpoint is setup by calling the
> +	 * VHOST_SCSI_SET_ENDPOINT ioctl.
> +	 *
> +	 * TODO: Check that we are running from vhost_worker which acts
> +	 * as read-side critical section for vhost kind of RCU.
> +	 * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h
> +	 */
> +	if (rcu_dereference_check(vq->private_data, 1))
> +		ret = true;
> +
> +	return ret;
> +}
> +

You can't use RCU in this way. You need to actually dereference.
Instead, just move this under the vq mutex and do
rcu_dereference_protected. Document that this function requires the vq
mutex.

> static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
> {
> 	return 1;
> -- 
> 1.8.1.4

^ permalink raw reply	[flat|nested] 37+ messages in thread
* Re: [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup 2013-04-18 7:09 ` [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup Michael S. Tsirkin @ 2013-04-18 8:32 ` Asias He 2013-04-18 7:38 ` Michael S. Tsirkin 0 siblings, 1 reply; 37+ messages in thread From: Asias He @ 2013-04-18 8:32 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Thu, Apr 18, 2013 at 10:09:53AM +0300, Michael S. Tsirkin wrote: > On Thu, Apr 18, 2013 at 09:05:53AM +0800, Asias He wrote: > > Signed-off-by: Asias He <asias@redhat.com> > > --- > > drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ > > 1 file changed, 18 insertions(+) > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > index 88ebb79..8f05528 100644 > > --- a/drivers/vhost/tcm_vhost.c > > +++ b/drivers/vhost/tcm_vhost.c > > @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) > > return ret; > > } > > > > +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > +{ > > + bool ret = false; > > + > > + /* > > + * We can handle the vq only after the endpoint is setup by calling the > > + * VHOST_SCSI_SET_ENDPOINT ioctl. > > + * > > + * TODO: Check that we are running from vhost_worker which acts > > + * as read-side critical section for vhost kind of RCU. > > + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h > > + */ > > + if (rcu_dereference_check(vq->private_data, 1)) > > + ret = true; > > + > > + return ret; > > +} > > + > > You can't use RCU in this way. You need to actually > derefence. Instead, just move this within vq mutex > and do rcu_dereference_protected. document that this function > requires vq mutex. Any difference with: vs_tpg = rcu_dereference_check(vq->private_data, 1); if (vs_tpg) return ture else return false; Note, tcm_vhost_check_endpoint() is called in multiple places. Having a helper is more convenient. > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > { > > return 1; > > -- > > 1.8.1.4 -- Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup 2013-04-18 8:32 ` Asias He @ 2013-04-18 7:38 ` Michael S. Tsirkin 2013-04-22 8:53 ` Asias He [not found] ` <20130422085327.GA21692@hj.localdomain> 0 siblings, 2 replies; 37+ messages in thread From: Michael S. Tsirkin @ 2013-04-18 7:38 UTC (permalink / raw) To: Asias He Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Thu, Apr 18, 2013 at 04:32:30PM +0800, Asias He wrote: > On Thu, Apr 18, 2013 at 10:09:53AM +0300, Michael S. Tsirkin wrote: > > On Thu, Apr 18, 2013 at 09:05:53AM +0800, Asias He wrote: > > > Signed-off-by: Asias He <asias@redhat.com> > > > --- > > > drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ > > > 1 file changed, 18 insertions(+) > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > index 88ebb79..8f05528 100644 > > > --- a/drivers/vhost/tcm_vhost.c > > > +++ b/drivers/vhost/tcm_vhost.c > > > @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) > > > return ret; > > > } > > > > > > +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > +{ > > > + bool ret = false; > > > + > > > + /* > > > + * We can handle the vq only after the endpoint is setup by calling the > > > + * VHOST_SCSI_SET_ENDPOINT ioctl. > > > + * > > > + * TODO: Check that we are running from vhost_worker which acts > > > + * as read-side critical section for vhost kind of RCU. > > > + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h > > > + */ > > > + if (rcu_dereference_check(vq->private_data, 1)) > > > + ret = true; > > > + > > > + return ret; > > > +} > > > + > > > > You can't use RCU in this way. You need to actually > > derefence. Instead, just move this within vq mutex > > and do rcu_dereference_protected. document that this function > > requires vq mutex. > > Any difference with: > > vs_tpg = rcu_dereference_check(vq->private_data, 1); Yes, it's not the same. rcu_dereference_protected says "this is generally RCU but here I use it under lock". rcu_dereference_check can only be used if you really dereference later, and you don't. > if (vs_tpg) > return ture > else > return false; > > Note, tcm_vhost_check_endpoint() is called in multiple places. Having a > helper is more convenient. I'm fine with the helper, but fix the implementation and fix all callers to have vq mutex when calling. > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > { > > > return 1; > > > -- > > > 1.8.1.4 > > -- > Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup 2013-04-18 7:38 ` Michael S. Tsirkin @ 2013-04-22 8:53 ` Asias He [not found] ` <20130422085327.GA21692@hj.localdomain> 1 sibling, 0 replies; 37+ messages in thread From: Asias He @ 2013-04-22 8:53 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Thu, Apr 18, 2013 at 10:38:23AM +0300, Michael S. Tsirkin wrote: > On Thu, Apr 18, 2013 at 04:32:30PM +0800, Asias He wrote: > > On Thu, Apr 18, 2013 at 10:09:53AM +0300, Michael S. Tsirkin wrote: > > > On Thu, Apr 18, 2013 at 09:05:53AM +0800, Asias He wrote: > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > --- > > > > drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ > > > > 1 file changed, 18 insertions(+) > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > index 88ebb79..8f05528 100644 > > > > --- a/drivers/vhost/tcm_vhost.c > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) > > > > return ret; > > > > } > > > > > > > > +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > +{ > > > > + bool ret = false; > > > > + > > > > + /* > > > > + * We can handle the vq only after the endpoint is setup by calling the > > > > + * VHOST_SCSI_SET_ENDPOINT ioctl. > > > > + * > > > > + * TODO: Check that we are running from vhost_worker which acts > > > > + * as read-side critical section for vhost kind of RCU. > > > > + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h > > > > + */ > > > > + if (rcu_dereference_check(vq->private_data, 1)) > > > > + ret = true; > > > > + > > > > + return ret; > > > > +} > > > > + > > > > > > You can't use RCU in this way. You need to actually > > > derefence. Instead, just move this within vq mutex > > > and do rcu_dereference_protected. document that this function > > > requires vq mutex. > > > > Any difference with: > > > > vs_tpg = rcu_dereference_check(vq->private_data, 1); > > Yes, it's not the same. rcu_dereference_protected says > "this is generally RCU but here I use it under lock". > rcu_dereference_check can only be used if you > really dereference later, and you don't. So you want the helper to be this? static bool tcm_vhost_check_endpoint() { struct tcm_vhost_tpg **vs_tpg; vs_tpg = rcu_dereference_check(vq->private_data, 1); if (vs_tpg) return true; else return false } And what difference code the compiler will generate with this: static bool tcm_vhost_check_endpoint() { if (rcu_dereference_check(vq->private_data, 1)) return true else return false } > > > if (vs_tpg) > > return ture > > else > > return false; > > > > Note, tcm_vhost_check_endpoint() is called in multiple places. Having a > > helper is more convenient. > > I'm fine with the helper, but fix the implementation > and fix all callers to have vq mutex when calling. All the caller are in the vhost thread. Why need vq mutex to be taken? > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > { > > > > return 1; > > > > -- > > > > 1.8.1.4 > > > > -- > > Asias -- Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup [not found] ` <20130422085327.GA21692@hj.localdomain> @ 2013-04-22 13:28 ` Michael S. Tsirkin 2013-04-22 15:00 ` Asias He 0 siblings, 1 reply; 37+ messages in thread From: Michael S. Tsirkin @ 2013-04-22 13:28 UTC (permalink / raw) To: Asias He Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Mon, Apr 22, 2013 at 04:53:27PM +0800, Asias He wrote: > On Thu, Apr 18, 2013 at 10:38:23AM +0300, Michael S. Tsirkin wrote: > > On Thu, Apr 18, 2013 at 04:32:30PM +0800, Asias He wrote: > > > On Thu, Apr 18, 2013 at 10:09:53AM +0300, Michael S. Tsirkin wrote: > > > > On Thu, Apr 18, 2013 at 09:05:53AM +0800, Asias He wrote: > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > > --- > > > > > drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ > > > > > 1 file changed, 18 insertions(+) > > > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > > index 88ebb79..8f05528 100644 > > > > > --- a/drivers/vhost/tcm_vhost.c > > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > > @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) > > > > > return ret; > > > > > } > > > > > > > > > > +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > > +{ > > > > > + bool ret = false; > > > > > + > > > > > + /* > > > > > + * We can handle the vq only after the endpoint is setup by calling the > > > > > + * VHOST_SCSI_SET_ENDPOINT ioctl. > > > > > + * > > > > > + * TODO: Check that we are running from vhost_worker which acts > > > > > + * as read-side critical section for vhost kind of RCU. > > > > > + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h > > > > > + */ > > > > > + if (rcu_dereference_check(vq->private_data, 1)) > > > > > + ret = true; > > > > > + > > > > > + return ret; > > > > > +} > > > > > + > > > > > > > > You can't use RCU in this way. You need to actually > > > > derefence. Instead, just move this within vq mutex > > > > and do rcu_dereference_protected. document that this function > > > > requires vq mutex. > > > > > > Any difference with: > > > > > > vs_tpg = rcu_dereference_check(vq->private_data, 1); > > > > Yes, it's not the same. rcu_dereference_protected says > > "this is generally RCU but here I use it under lock". > > rcu_dereference_check can only be used if you > > really dereference later, and you don't. > > So you want the helper to be this? > > static bool tcm_vhost_check_endpoint() > { > struct tcm_vhost_tpg **vs_tpg; > > vs_tpg = rcu_dereference_check(vq->private_data, 1); > if (vs_tpg) > return true; > else > return false > } > > And what difference code the compiler will generate with this: > > static bool tcm_vhost_check_endpoint() > { > > if (rcu_dereference_check(vq->private_data, 1)) > return true > else > return false > } > No, what I want is that private data is either dereferenced (not tested against NULL) or used under vq mutex. In this case the second option is the easiest and the cleanest. So move it under mutex and then test it safely without rcu. > > > > > if (vs_tpg) > > > return ture > > > else > > > return false; > > > > > > Note, tcm_vhost_check_endpoint() is called in multiple places. Having a > > > helper is more convenient. > > > > I'm fine with the helper, but fix the implementation > > and fix all callers to have vq mutex when calling. > > All the caller are in the vhost thread. Why need vq mutex to be taken? 
vhost thread is an implementation detail, people are experimenting with various threading strategies. In particular, code must keep working when we move it to use a workqueue. > > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > > { > > > > > return 1; > > > > > -- > > > > > 1.8.1.4 > > > > > > -- > > > Asias > > -- > Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup 2013-04-22 13:28 ` Michael S. Tsirkin @ 2013-04-22 15:00 ` Asias He 2013-04-22 16:21 ` Michael S. Tsirkin 0 siblings, 1 reply; 37+ messages in thread From: Asias He @ 2013-04-22 15:00 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Mon, Apr 22, 2013 at 04:28:07PM +0300, Michael S. Tsirkin wrote: > On Mon, Apr 22, 2013 at 04:53:27PM +0800, Asias He wrote: > > On Thu, Apr 18, 2013 at 10:38:23AM +0300, Michael S. Tsirkin wrote: > > > On Thu, Apr 18, 2013 at 04:32:30PM +0800, Asias He wrote: > > > > On Thu, Apr 18, 2013 at 10:09:53AM +0300, Michael S. Tsirkin wrote: > > > > > On Thu, Apr 18, 2013 at 09:05:53AM +0800, Asias He wrote: > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > > > --- > > > > > > drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ > > > > > > 1 file changed, 18 insertions(+) > > > > > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > > > index 88ebb79..8f05528 100644 > > > > > > --- a/drivers/vhost/tcm_vhost.c > > > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > > > @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) > > > > > > return ret; > > > > > > } > > > > > > > > > > > > +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > > > +{ > > > > > > + bool ret = false; > > > > > > + > > > > > > + /* > > > > > > + * We can handle the vq only after the endpoint is setup by calling the > > > > > > + * VHOST_SCSI_SET_ENDPOINT ioctl. > > > > > > + * > > > > > > + * TODO: Check that we are running from vhost_worker which acts > > > > > > + * as read-side critical section for vhost kind of RCU. > > > > > > + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h > > > > > > + */ > > > > > > + if (rcu_dereference_check(vq->private_data, 1)) > > > > > > + ret = true; > > > > > > + > > > > > > + return ret; > > > > > > +} > > > > > > + > > > > > > > > > > You can't use RCU in this way. You need to actually > > > > > derefence. Instead, just move this within vq mutex > > > > > and do rcu_dereference_protected. document that this function > > > > > requires vq mutex. > > > > > > > > Any difference with: > > > > > > > > vs_tpg = rcu_dereference_check(vq->private_data, 1); > > > > > > Yes, it's not the same. rcu_dereference_protected says > > > "this is generally RCU but here I use it under lock". > > > rcu_dereference_check can only be used if you > > > really dereference later, and you don't. > > > > So you want the helper to be this? > > > > static bool tcm_vhost_check_endpoint() > > { > > struct tcm_vhost_tpg **vs_tpg; > > > > vs_tpg = rcu_dereference_check(vq->private_data, 1); > > if (vs_tpg) > > return true; > > else > > return false > > } > > > > And what difference code the compiler will generate with this: > > > > static bool tcm_vhost_check_endpoint() > > { > > > > if (rcu_dereference_check(vq->private_data, 1)) > > return true > > else > > return false > > } > > > > No, what I want is that private data is > either dereferenced (not tested against NULL) > or used under vq mutex. > In this case the second option is the easiest > and the cleanest. So move it under mutex > and then test it safely without rcu. Why you use it in -net? The test against NULL is under in the rcu read critical section. What is wrong with this really? 
> > > > > > > if (vs_tpg) > > > > return ture > > > > else > > > > return false; > > > > > > > > Note, tcm_vhost_check_endpoint() is called in multiple places. Having a > > > > helper is more convenient. > > > > > > I'm fine with the helper, but fix the implementation > > > and fix all callers to have vq mutex when calling. > > > > All the caller are in the vhost thread. Why need vq mutex to be taken? > > vhost thread is an implementation detail, people are experimenting with > various threading strategies. In particular, code must > keep working when we move it to use a workqueue. Well, you should remove your vhost RCU thing first and use real RCU explicitly. Does -net keep working after vhost changes to workqueue? Since when this becomes a requirement? > > > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > > > { > > > > > > return 1; > > > > > > -- > > > > > > 1.8.1.4 > > > > > > > > -- > > > > Asias > > > > -- > > Asias -- Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup 2013-04-22 15:00 ` Asias He @ 2013-04-22 16:21 ` Michael S. Tsirkin 0 siblings, 0 replies; 37+ messages in thread From: Michael S. Tsirkin @ 2013-04-22 16:21 UTC (permalink / raw) To: Asias He Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Mon, Apr 22, 2013 at 11:00:19PM +0800, Asias He wrote: > On Mon, Apr 22, 2013 at 04:28:07PM +0300, Michael S. Tsirkin wrote: > > On Mon, Apr 22, 2013 at 04:53:27PM +0800, Asias He wrote: > > > On Thu, Apr 18, 2013 at 10:38:23AM +0300, Michael S. Tsirkin wrote: > > > > On Thu, Apr 18, 2013 at 04:32:30PM +0800, Asias He wrote: > > > > > On Thu, Apr 18, 2013 at 10:09:53AM +0300, Michael S. Tsirkin wrote: > > > > > > On Thu, Apr 18, 2013 at 09:05:53AM +0800, Asias He wrote: > > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > > > > --- > > > > > > > drivers/vhost/tcm_vhost.c | 18 ++++++++++++++++++ > > > > > > > 1 file changed, 18 insertions(+) > > > > > > > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > > > > index 88ebb79..8f05528 100644 > > > > > > > --- a/drivers/vhost/tcm_vhost.c > > > > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > > > > @@ -111,6 +111,24 @@ static bool tcm_vhost_check_feature(struct vhost_scsi *vs, int feature) > > > > > > > return ret; > > > > > > > } > > > > > > > > > > > > > > +static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > > > > +{ > > > > > > > + bool ret = false; > > > > > > > + > > > > > > > + /* > > > > > > > + * We can handle the vq only after the endpoint is setup by calling the > > > > > > > + * VHOST_SCSI_SET_ENDPOINT ioctl. > > > > > > > + * > > > > > > > + * TODO: Check that we are running from vhost_worker which acts > > > > > > > + * as read-side critical section for vhost kind of RCU. > > > > > > > + * See the comments in struct vhost_virtqueue in drivers/vhost/vhost.h > > > > > > > + */ > > > > > > > + if (rcu_dereference_check(vq->private_data, 1)) > > > > > > > + ret = true; > > > > > > > + > > > > > > > + return ret; > > > > > > > +} > > > > > > > + > > > > > > > > > > > > You can't use RCU in this way. You need to actually > > > > > > derefence. Instead, just move this within vq mutex > > > > > > and do rcu_dereference_protected. document that this function > > > > > > requires vq mutex. > > > > > > > > > > Any difference with: > > > > > > > > > > vs_tpg = rcu_dereference_check(vq->private_data, 1); > > > > > > > > Yes, it's not the same. rcu_dereference_protected says > > > > "this is generally RCU but here I use it under lock". > > > > rcu_dereference_check can only be used if you > > > > really dereference later, and you don't. > > > > > > So you want the helper to be this? > > > > > > static bool tcm_vhost_check_endpoint() > > > { > > > struct tcm_vhost_tpg **vs_tpg; > > > > > > vs_tpg = rcu_dereference_check(vq->private_data, 1); > > > if (vs_tpg) > > > return true; > > > else > > > return false > > > } > > > > > > And what difference code the compiler will generate with this: > > > > > > static bool tcm_vhost_check_endpoint() > > > { > > > > > > if (rcu_dereference_check(vq->private_data, 1)) > > > return true > > > else > > > return false > > > } > > > > > > > No, what I want is that private data is > > either dereferenced (not tested against NULL) > > or used under vq mutex. > > In this case the second option is the easiest > > and the cleanest. So move it under mutex > > and then test it safely without rcu. 
> > Why you use it in -net? The test against NULL is under in the rcu > read critical section. What is wrong with this really? Look at the code and see that barriers in rcu_dereference assume that pointer really is dereferenced. You don't dereference it, so rcu_dereference_XXX is wrong. > > > > > > > > > if (vs_tpg) > > > > > return ture > > > > > else > > > > > return false; > > > > > > > > > > Note, tcm_vhost_check_endpoint() is called in multiple places. Having a > > > > > helper is more convenient. > > > > > > > > I'm fine with the helper, but fix the implementation > > > > and fix all callers to have vq mutex when calling. > > > > > > All the caller are in the vhost thread. Why need vq mutex to be taken? > > > > vhost thread is an implementation detail, people are experimenting with > > various threading strategies. In particular, code must > > keep working when we move it to use a workqueue. > > Well, you should remove your vhost RCU thing first and use real RCU > explicitly. I don't think it's useful or possible. > Does -net keep working after vhost changes to workqueue? I think so, yes. > Since when this becomes a requirement? It's the requirement to use the abstractions that are in place. Devices should use the interfaces and not assume vhost implementation, it happens to be the case but it's not guaranteed. > > > > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > > > > { > > > > > > > return 1; > > > > > > > -- > > > > > > > 1.8.1.4 > > > > > > > > > > -- > > > > > Asias > > > > > > -- > > > Asias > > -- > Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
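As an aside, the RCU API also provides rcu_access_pointer() for exactly the "fetch only for a NULL test, no dereference" case. A hedged alternative sketch of the helper using it is below; it answers the barrier objection, but not, by itself, the separate request to do the check under the vq mutex.

/*
 * Sketch: rcu_access_pointer() returns the pointer value without implying a
 * read-side dereference, so it is the documented way to test an RCU-managed
 * pointer against NULL.
 */
static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq)
{
	return rcu_access_pointer(vq->private_data) != NULL;
}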
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support [not found] ` <1366247154-3136-4-git-send-email-asias@redhat.com> @ 2013-04-18 7:34 ` Michael S. Tsirkin 2013-04-18 8:59 ` Asias He 0 siblings, 1 reply; 37+ messages in thread From: Michael S. Tsirkin @ 2013-04-18 7:34 UTC (permalink / raw) To: Asias He Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > virtio-scsi), hotplug support is added to virtio-scsi. > > This patch adds hotplug and hotunplug support to tcm_vhost. > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > LUN in guest. > > Changes in v7: > - Add vhost_work_flush for vs->vs_event_work to this series > > Changes in v6: > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > Changes in v5: > - Switch to int from u64 to vs_events_nr > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > - Do not nest dev mutex within vq mutex > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > - Rebase to target/master > > Changes in v4: > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > Changes in v3: > - Separate the bug fix to another thread > > Changes in v2: > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > - Fix racing of vs_events_nr > - Add flush fix patch to this series > > Signed-off-by: Asias He <asias@redhat.com> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > --- > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > drivers/vhost/tcm_vhost.h | 10 +++ > 2 files changed, 214 insertions(+), 4 deletions(-) > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > index 8f05528..da2021b 100644 > --- a/drivers/vhost/tcm_vhost.c > +++ b/drivers/vhost/tcm_vhost.c > @@ -66,11 +66,13 @@ enum { > * TODO: debug and remove the workaround. > */ > enum { > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > }; > > #define VHOST_SCSI_MAX_TARGET 256 > #define VHOST_SCSI_MAX_VQ 128 > +#define VHOST_SCSI_MAX_EVENT 128 > > struct vhost_scsi { > /* Protected by vhost_scsi->dev.mutex */ > @@ -82,6 +84,13 @@ struct vhost_scsi { > > struct vhost_work vs_completion_work; /* cmd completion work item */ > struct llist_head vs_completion_list; /* cmd completion queue */ > + > + struct vhost_work vs_event_work; /* evt injection work item */ > + struct llist_head vs_event_list; /* evt injection queue */ > + > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ Looking at this code, there are just so many locks now. This does not make me feel safe :) At least, please document lock nesting. 
> + bool vs_events_dropped; /* any missed events */ > + int vs_events_nr; /* num of pending events */ > }; > > /* Local pointer to allocated TCM configfs fabric module */ > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > return ret; > } > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > +{ > + bool ret; > + > + mutex_lock(&vs->vs_events_lock); > + ret = vs->vs_events_dropped; > + mutex_unlock(&vs->vs_events_lock); > + > + return ret; > +} > + > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > { > return 1; > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > return 0; > } > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > +{ > + mutex_lock(&vs->vs_events_lock); > + vs->vs_events_nr--; > + kfree(evt); > + mutex_unlock(&vs->vs_events_lock); > +} > + > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > + u32 event, u32 reason) > +{ > + struct tcm_vhost_evt *evt; > + > + mutex_lock(&vs->vs_events_lock); > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > + vs->vs_events_dropped = true; > + mutex_unlock(&vs->vs_events_lock); > + return NULL; > + } > + > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); I think kzalloc not needed here, you init all fields. Also, this basically means if backend does plug/unplug very quickly, we start consuming kernel memory without a limit. Not good. How about we allocate the event as part of target? There shouldn't ever be more than one hotplug event in flight per target, right? > + if (evt) { Let's do clean error handling here and above: if (!evt) goto err; > + evt->event.event = event; > + evt->event.reason = reason; > + vs->vs_events_nr++; > + } > + mutex_unlock(&vs->vs_events_lock); > + > + return evt; > +} > + > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > { > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > kfree(tv_cmd); > } > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > + struct tcm_vhost_evt *evt) > +{ > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > + struct virtio_scsi_event *event = &evt->event; > + struct virtio_scsi_event __user *eventp; > + unsigned out, in; > + int head, ret; > + > + if (!tcm_vhost_check_endpoint(vq)) > + return; > + > + mutex_lock(&vs->vs_events_lock); > + mutex_lock(&vq->mutex); > +again: > + vhost_disable_notify(&vs->dev, vq); > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > + ARRAY_SIZE(vq->iov), &out, &in, > + NULL, NULL); > + if (head < 0) { > + vs->vs_events_dropped = true; > + goto out; > + } > + if (head == vq->num) { > + if (vhost_enable_notify(&vs->dev, vq)) > + goto again; Please write loops with while() or for(). Not with goto. goto is for error handling. 
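For illustration, the descriptor-fetch sequence above can be written as a loop while keeping goto only for the error exits. This is a sketch of the corresponding fragment of tcm_vhost_do_evt_work(), reusing the patch's local variables and its existing out: label:

	/* Fetch one event descriptor, re-polling if a kick races with us. */
	do {
		vhost_disable_notify(&vs->dev, vq);
		head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
					ARRAY_SIZE(vq->iov), &out, &in,
					NULL, NULL);
		if (head < 0) {
			vs->vs_events_dropped = true;
			goto out;
		}
	} while (head == vq->num && vhost_enable_notify(&vs->dev, vq));

	if (head == vq->num) {
		/* Ring is empty and no new buffers were posted: note the miss. */
		vs->vs_events_dropped = true;
		goto out;
	}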
> + vs->vs_events_dropped = true; > + goto out; > + } > + > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > + vq->iov[out].iov_len); > + vs->vs_events_dropped = true; > + goto out; > + } > + > + if (vs->vs_events_dropped) { > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > + vs->vs_events_dropped = false; > + } > + > + eventp = vq->iov[out].iov_base; > + ret = __copy_to_user(eventp, event, sizeof(*event)); > + if (!ret) > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > + else > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > +out: > + mutex_unlock(&vq->mutex); > + mutex_unlock(&vs->vs_events_lock); > +} > + > +static void tcm_vhost_evt_work(struct vhost_work *work) > +{ > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > + vs_event_work); > + struct tcm_vhost_evt *evt; > + struct llist_node *llnode; > + > + llnode = llist_del_all(&vs->vs_event_list); The assumption is that this is slow path thing, no need to worry about speed, yes? so let's do simple list_add, list_del etc. > + while (llnode) { > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > + llnode = llist_next(llnode); > + tcm_vhost_do_evt_work(vs, evt); > + tcm_vhost_free_evt(vs, evt); > + } > +} > + > /* Fill in status and signal that we are done processing this command > * > * This is scheduled in the vhost work queue so we are called with the owner > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > pr_debug("%s: The handling func for control queue.\n", __func__); > } > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > + struct se_lun *lun, u32 event, u32 reason) > +{ > + struct tcm_vhost_evt *evt; > + > + evt = tcm_vhost_allocate_evt(vs, event, reason); > + if (!evt) > + return -ENOMEM; > + > + if (tpg && lun) { > + /* TODO: share lun setup code with virtio-scsi.ko */ > + evt->event.lun[0] = 0x01; > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > + if (lun->unpacked_lun >= 256) > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > + } > + > + llist_add(&evt->list, &vs->vs_event_list); > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > + > + return 0; > +} > + > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > { > - pr_debug("%s: The handling func for event queue.\n", __func__); > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > + poll.work); > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > + > + if (!tcm_vhost_check_endpoint(vq)) > + return; Again just drop this check. 
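Separately, going back to the lun[] encoding in tcm_vhost_send_evt() quoted a little above, a worked example of the bytes the patch produces may help. For unpacked_lun = 300 on tport_tpgt = 2:

	lun[0] = 0x01                      (first byte required by the virtio-scsi LUN format)
	lun[1] = 0x02                      (tport_tpgt & 0xFF)
	lun[2] = (300 >> 8) | 0x40 = 0x41  (only written because 300 >= 256)
	lun[3] = 300 & 0xFF = 0x2c
	lun[4..7] = 0                      (relies on the event memory being zeroed)

For unpacked_lun below 256, lun[2] stays 0 and lun[3] carries the LUN directly.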
> + > + if (tcm_vhost_check_events_dropped(vs)) > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); > + > } > > static void vhost_scsi_handle_kick(struct vhost_work *work) > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) > vhost_scsi_flush_vq(vs, i); > vhost_work_flush(&vs->dev, &vs->vs_completion_work); > + vhost_work_flush(&vs->dev, &vs->vs_event_work); > } > > /* > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( > return -EEXIST; > } > tv_tpg->tv_tpg_vhost_count++; > + tv_tpg->vhost_scsi = vs; > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; > smp_mb__after_atomic_inc(); > match = true; > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( > goto err_tpg; > } > tv_tpg->tv_tpg_vhost_count--; > + tv_tpg->vhost_scsi = NULL; > vs->vs_tpg[target] = NULL; > match = true; > mutex_unlock(&tv_tpg->tv_tpg_mutex); > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) > return -ENOMEM; > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); > + > + s->vs_events_nr = 0; > + s->vs_events_dropped = false; > + mutex_init(&s->vs_events_lock); > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) > return "Unknown"; > } > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, > + struct se_lun *lun, bool plug) > +{ > + > + struct vhost_scsi *vs = tpg->vhost_scsi; > + u32 reason; > + > + mutex_lock(&tpg->tv_tpg_mutex); > + vs = tpg->vhost_scsi; > + mutex_unlock(&tpg->tv_tpg_mutex); > + if (!vs) > + return -EOPNOTSUPP; Why EOPNOTSUPP? When does this happen? > + > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) > + return -EOPNOTSUPP; > + > + if (plug) > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; > + else > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; > + You have dropped the tpg lock so tpg->vhost_scsi can become NULL now. Why is this safe? > + return tcm_vhost_send_evt(vs, tpg, lun, > + VIRTIO_SCSI_T_TRANSPORT_RESET, > + reason); > +} > + > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > +{ > + return tcm_vhost_do_plug(tpg, lun, true); > +} > + > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > +{ > + return tcm_vhost_do_plug(tpg, lun, false); > +} > + What are these wrappers for? Seem useless ... 
> static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > struct se_lun *lun) > { > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > tv_tpg->tv_tpg_port_count++; > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > + tcm_vhost_hotplug(tv_tpg, lun); > + > return 0; > } > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, > - struct se_lun *se_lun) > + struct se_lun *lun) > { > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, > struct tcm_vhost_tpg, se_tpg); > - > mutex_lock(&tv_tpg->tv_tpg_mutex); > tv_tpg->tv_tpg_port_count--; > mutex_unlock(&tv_tpg->tv_tpg_mutex); > + > + tcm_vhost_hotunplug(tv_tpg, lun); > } > > static struct se_node_acl *tcm_vhost_make_nodeacl( > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h > index 1d2ae7a..94e9ee53 100644 > --- a/drivers/vhost/tcm_vhost.h > +++ b/drivers/vhost/tcm_vhost.h > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { > struct se_node_acl se_node_acl; > }; > > +struct vhost_scsi; > struct tcm_vhost_tpg { > /* Vhost port target portal group tag for TCM */ > u16 tport_tpgt; > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { > struct tcm_vhost_tport *tport; > /* Returned by tcm_vhost_make_tpg() */ > struct se_portal_group se_tpg; > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ > + struct vhost_scsi *vhost_scsi; > }; > > struct tcm_vhost_tport { > @@ -83,6 +86,13 @@ struct tcm_vhost_tport { > struct se_wwn tport_wwn; > }; > > +struct tcm_vhost_evt { > + /* virtio_scsi event */ > + struct virtio_scsi_event event; > + /* virtio_scsi event list, serviced from vhost worker thread */ > + struct llist_node list; > +}; > + > /* > * As per request from MST, keep TCM_VHOST related ioctl defines out of > * linux/vhost.h (user-space) for now.. > -- > 1.8.1.4 ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-18 7:34 ` [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support Michael S. Tsirkin @ 2013-04-18 8:59 ` Asias He 2013-04-18 8:21 ` Michael S. Tsirkin 0 siblings, 1 reply; 37+ messages in thread From: Asias He @ 2013-04-18 8:59 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Thu, Apr 18, 2013 at 10:34:29AM +0300, Michael S. Tsirkin wrote: > On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > > virtio-scsi), hotplug support is added to virtio-scsi. > > > > This patch adds hotplug and hotunplug support to tcm_vhost. > > > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > > LUN in guest. > > > > Changes in v7: > > - Add vhost_work_flush for vs->vs_event_work to this series > > > > Changes in v6: > > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > > > Changes in v5: > > - Switch to int from u64 to vs_events_nr > > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > > - Do not nest dev mutex within vq mutex > > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > > - Rebase to target/master > > > > Changes in v4: > > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > > > Changes in v3: > > - Separate the bug fix to another thread > > > > Changes in v2: > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > > - Fix racing of vs_events_nr > > - Add flush fix patch to this series > > > > Signed-off-by: Asias He <asias@redhat.com> > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > --- > > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > > drivers/vhost/tcm_vhost.h | 10 +++ > > 2 files changed, 214 insertions(+), 4 deletions(-) > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > index 8f05528..da2021b 100644 > > --- a/drivers/vhost/tcm_vhost.c > > +++ b/drivers/vhost/tcm_vhost.c > > @@ -66,11 +66,13 @@ enum { > > * TODO: debug and remove the workaround. > > */ > > enum { > > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > > }; > > > > #define VHOST_SCSI_MAX_TARGET 256 > > #define VHOST_SCSI_MAX_VQ 128 > > +#define VHOST_SCSI_MAX_EVENT 128 > > > > struct vhost_scsi { > > /* Protected by vhost_scsi->dev.mutex */ > > @@ -82,6 +84,13 @@ struct vhost_scsi { > > > > struct vhost_work vs_completion_work; /* cmd completion work item */ > > struct llist_head vs_completion_list; /* cmd completion queue */ > > + > > + struct vhost_work vs_event_work; /* evt injection work item */ > > + struct llist_head vs_event_list; /* evt injection queue */ > > + > > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ > > Looking at this code, there are just so many locks now. > This does not make me feel safe :) > At least, please document lock nesting. > Do you see a real problem? 
> > + bool vs_events_dropped; /* any missed events */ > > + int vs_events_nr; /* num of pending events */ > > }; > > > > /* Local pointer to allocated TCM configfs fabric module */ > > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > return ret; > > } > > > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > > +{ > > + bool ret; > > + > > + mutex_lock(&vs->vs_events_lock); > > + ret = vs->vs_events_dropped; > > + mutex_unlock(&vs->vs_events_lock); > > + > > + return ret; > > +} > > + > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > { > > return 1; > > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > > return 0; > > } > > > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > > +{ > > + mutex_lock(&vs->vs_events_lock); > > + vs->vs_events_nr--; > > + kfree(evt); > > + mutex_unlock(&vs->vs_events_lock); > > +} > > + > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > > + u32 event, u32 reason) > > +{ > > + struct tcm_vhost_evt *evt; > > + > > + mutex_lock(&vs->vs_events_lock); > > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > > + vs->vs_events_dropped = true; > > + mutex_unlock(&vs->vs_events_lock); > > + return NULL; > > + } > > + > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); > > I think kzalloc not needed here, you init all fields. Not really! evt->event.lun[4-7] is not initialized. It needs to be 0. > Also, this basically means if backend does plug/unplug very quickly, > we start consuming kernel memory without a limit. Not good. > How about we allocate the event as part of target? > There shouldn't ever be more than one hotplug > event in flight per target, right? It is limited by VHOST_SCSI_MAX_EVENT. > > > + if (evt) { > > Let's do clean error handling here and above: > if (!evt) > goto err; We can you do in err? You simply unlock and return. Why bother? How cleaner it will be. > > + evt->event.event = event; > > + evt->event.reason = reason; > > + vs->vs_events_nr++; > > + } > > + mutex_unlock(&vs->vs_events_lock); > > + > > + return evt; > > +} > > + > > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > { > > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > kfree(tv_cmd); > > } > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > > + struct tcm_vhost_evt *evt) > > +{ > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > > + struct virtio_scsi_event *event = &evt->event; > > + struct virtio_scsi_event __user *eventp; > > + unsigned out, in; > > + int head, ret; > > + > > + if (!tcm_vhost_check_endpoint(vq)) > > + return; > > + > > + mutex_lock(&vs->vs_events_lock); > > + mutex_lock(&vq->mutex); > > +again: > > + vhost_disable_notify(&vs->dev, vq); > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > > + ARRAY_SIZE(vq->iov), &out, &in, > > + NULL, NULL); > > + if (head < 0) { > > + vs->vs_events_dropped = true; > > + goto out; > > + } > > + if (head == vq->num) { > > + if (vhost_enable_notify(&vs->dev, vq)) > > + goto again; > > Please write loops with while() or for(). > Not with goto. goto is for error handling. This makes extra indention which is more ugly. 
> > + vs->vs_events_dropped = true; > > + goto out; > > + } > > + > > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > > + vq->iov[out].iov_len); > > + vs->vs_events_dropped = true; > > + goto out; > > + } > > + > > + if (vs->vs_events_dropped) { > > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > > + vs->vs_events_dropped = false; > > + } > > + > > + eventp = vq->iov[out].iov_base; > > + ret = __copy_to_user(eventp, event, sizeof(*event)); > > + if (!ret) > > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > > + else > > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > > +out: > > + mutex_unlock(&vq->mutex); > > + mutex_unlock(&vs->vs_events_lock); > > +} > > + > > +static void tcm_vhost_evt_work(struct vhost_work *work) > > +{ > > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > > + vs_event_work); > > + struct tcm_vhost_evt *evt; > > + struct llist_node *llnode; > > + > > + llnode = llist_del_all(&vs->vs_event_list); > > The assumption is that this is slow path thing, no need to worry about > speed, yes? so let's do simple list_add, list_del etc. Why it is simpler? We are using llist for cmd. Why it is better using one for evt and the different for cmd? Similar code makes people easier to read. > > + while (llnode) { > > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > > + llnode = llist_next(llnode); > > + tcm_vhost_do_evt_work(vs, evt); > > + tcm_vhost_free_evt(vs, evt); > > + } > > +} > > + > > /* Fill in status and signal that we are done processing this command > > * > > * This is scheduled in the vhost work queue so we are called with the owner > > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > > pr_debug("%s: The handling func for control queue.\n", __func__); > > } > > > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > > + struct se_lun *lun, u32 event, u32 reason) > > +{ > > + struct tcm_vhost_evt *evt; > > + > > + evt = tcm_vhost_allocate_evt(vs, event, reason); > > + if (!evt) > > + return -ENOMEM; > > + > > + if (tpg && lun) { > > + /* TODO: share lun setup code with virtio-scsi.ko */ > > + evt->event.lun[0] = 0x01; > > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > > + if (lun->unpacked_lun >= 256) > > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > > + } > > + > > + llist_add(&evt->list, &vs->vs_event_list); > > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > > + > > + return 0; > > +} > > + > > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > > { > > - pr_debug("%s: The handling func for event queue.\n", __func__); > > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > > + poll.work); > > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > > + > > + if (!tcm_vhost_check_endpoint(vq)) > > + return; > > Again just drop this check. Why? 
> > + > > + if (tcm_vhost_check_events_dropped(vs)) > > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); > > + > > } > > > > static void vhost_scsi_handle_kick(struct vhost_work *work) > > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) > > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) > > vhost_scsi_flush_vq(vs, i); > > vhost_work_flush(&vs->dev, &vs->vs_completion_work); > > + vhost_work_flush(&vs->dev, &vs->vs_event_work); > > } > > > > /* > > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( > > return -EEXIST; > > } > > tv_tpg->tv_tpg_vhost_count++; > > + tv_tpg->vhost_scsi = vs; > > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; > > smp_mb__after_atomic_inc(); > > match = true; > > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( > > goto err_tpg; > > } > > tv_tpg->tv_tpg_vhost_count--; > > + tv_tpg->vhost_scsi = NULL; > > vs->vs_tpg[target] = NULL; > > match = true; > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) > > return -ENOMEM; > > > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); > > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); > > + > > + s->vs_events_nr = 0; > > + s->vs_events_dropped = false; > > + mutex_init(&s->vs_events_lock); > > > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; > > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; > > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) > > return "Unknown"; > > } > > > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, > > + struct se_lun *lun, bool plug) > > +{ > > + > > + struct vhost_scsi *vs = tpg->vhost_scsi; > > + u32 reason; > > + > > + mutex_lock(&tpg->tv_tpg_mutex); > > + vs = tpg->vhost_scsi; > > + mutex_unlock(&tpg->tv_tpg_mutex); > > + if (!vs) > > + return -EOPNOTSUPP; > > Why EOPNOTSUPP? When does this happen? When tpg->vhost_scsi has not been setup. E.g. no one starts a vhost-scsi guest. > > + > > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) > > + return -EOPNOTSUPP; > > + > > + if (plug) > > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; > > + else > > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; > > + > > You have dropped the tpg lock so tpg->vhost_scsi can become > NULL now. Why is this safe? > > > + return tcm_vhost_send_evt(vs, tpg, lun, > > + VIRTIO_SCSI_T_TRANSPORT_RESET, > > + reason); > > +} > > + > > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > +{ > > + return tcm_vhost_do_plug(tpg, lun, true); > > +} > > + > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > +{ > > + return tcm_vhost_do_plug(tpg, lun, false); > > +} > > + > > What are these wrappers for? Seem useless ... It make people easier to understand what's the true and false is about in tcm_vhost_do_plug. 
> > static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > struct se_lun *lun) > > { > > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > tv_tpg->tv_tpg_port_count++; > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > + tcm_vhost_hotplug(tv_tpg, lun); > > + > > return 0; > > } > > > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, > > - struct se_lun *se_lun) > > + struct se_lun *lun) > > { > > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, > > struct tcm_vhost_tpg, se_tpg); > > - > > mutex_lock(&tv_tpg->tv_tpg_mutex); > > tv_tpg->tv_tpg_port_count--; > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > + > > + tcm_vhost_hotunplug(tv_tpg, lun); > > } > > > > static struct se_node_acl *tcm_vhost_make_nodeacl( > > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h > > index 1d2ae7a..94e9ee53 100644 > > --- a/drivers/vhost/tcm_vhost.h > > +++ b/drivers/vhost/tcm_vhost.h > > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { > > struct se_node_acl se_node_acl; > > }; > > > > +struct vhost_scsi; > > struct tcm_vhost_tpg { > > /* Vhost port target portal group tag for TCM */ > > u16 tport_tpgt; > > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { > > struct tcm_vhost_tport *tport; > > /* Returned by tcm_vhost_make_tpg() */ > > struct se_portal_group se_tpg; > > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ > > + struct vhost_scsi *vhost_scsi; > > }; > > > > struct tcm_vhost_tport { > > @@ -83,6 +86,13 @@ struct tcm_vhost_tport { > > struct se_wwn tport_wwn; > > }; > > > > +struct tcm_vhost_evt { > > + /* virtio_scsi event */ > > + struct virtio_scsi_event event; > > + /* virtio_scsi event list, serviced from vhost worker thread */ > > + struct llist_node list; > > +}; > > + > > /* > > * As per request from MST, keep TCM_VHOST related ioctl defines out of > > * linux/vhost.h (user-space) for now.. > > -- > > 1.8.1.4 -- Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-18 8:59 ` Asias He @ 2013-04-18 8:21 ` Michael S. Tsirkin 2013-04-19 2:34 ` Asias He 0 siblings, 1 reply; 37+ messages in thread From: Michael S. Tsirkin @ 2013-04-18 8:21 UTC (permalink / raw) To: Asias He Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Thu, Apr 18, 2013 at 04:59:08PM +0800, Asias He wrote: > On Thu, Apr 18, 2013 at 10:34:29AM +0300, Michael S. Tsirkin wrote: > > On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > > > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > > > virtio-scsi), hotplug support is added to virtio-scsi. > > > > > > This patch adds hotplug and hotunplug support to tcm_vhost. > > > > > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > > > LUN in guest. > > > > > > Changes in v7: > > > - Add vhost_work_flush for vs->vs_event_work to this series > > > > > > Changes in v6: > > > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > > > > > Changes in v5: > > > - Switch to int from u64 to vs_events_nr > > > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > > > - Do not nest dev mutex within vq mutex > > > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > > > - Rebase to target/master > > > > > > Changes in v4: > > > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > > > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > > > > > Changes in v3: > > > - Separate the bug fix to another thread > > > > > > Changes in v2: > > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > > > - Fix racing of vs_events_nr > > > - Add flush fix patch to this series > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > > --- > > > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > > > drivers/vhost/tcm_vhost.h | 10 +++ > > > 2 files changed, 214 insertions(+), 4 deletions(-) > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > index 8f05528..da2021b 100644 > > > --- a/drivers/vhost/tcm_vhost.c > > > +++ b/drivers/vhost/tcm_vhost.c > > > @@ -66,11 +66,13 @@ enum { > > > * TODO: debug and remove the workaround. > > > */ > > > enum { > > > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > > > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > > > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > > > }; > > > > > > #define VHOST_SCSI_MAX_TARGET 256 > > > #define VHOST_SCSI_MAX_VQ 128 > > > +#define VHOST_SCSI_MAX_EVENT 128 > > > > > > struct vhost_scsi { > > > /* Protected by vhost_scsi->dev.mutex */ > > > @@ -82,6 +84,13 @@ struct vhost_scsi { > > > > > > struct vhost_work vs_completion_work; /* cmd completion work item */ > > > struct llist_head vs_completion_list; /* cmd completion queue */ > > > + > > > + struct vhost_work vs_event_work; /* evt injection work item */ > > > + struct llist_head vs_event_list; /* evt injection queue */ > > > + > > > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ > > > > Looking at this code, there are just so many locks now. > > This does not make me feel safe :) > > At least, please document lock nesting. > > > > Do you see a real problem? Complexity is a real problem. My head already spins. No I don't see a bug, but we need to simplify locking. And I think I see a nice way to do this: 1. move away from a global work to per-event work - so no list 2. 
remove dynamic allocation of events - so no events_nr 3. set/test overrun flag under the appropriate vq mutex I think that's ideal. But we can move there in small steps. As a first step - why can't we always take the vq mutex lock and drop vs_events_lock? > > > + bool vs_events_dropped; /* any missed events */ > > > + int vs_events_nr; /* num of pending events */ > > > }; > > > > > > /* Local pointer to allocated TCM configfs fabric module */ > > > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > return ret; > > > } > > > > > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > > > +{ > > > + bool ret; > > > + > > > + mutex_lock(&vs->vs_events_lock); > > > + ret = vs->vs_events_dropped; > > > + mutex_unlock(&vs->vs_events_lock); > > > + > > > + return ret; > > > +} > > > + > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > { > > > return 1; > > > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > > > return 0; > > > } > > > > > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > > > +{ > > > + mutex_lock(&vs->vs_events_lock); > > > + vs->vs_events_nr--; > > > + kfree(evt); > > > + mutex_unlock(&vs->vs_events_lock); > > > +} > > > + > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > > > + u32 event, u32 reason) > > > +{ > > > + struct tcm_vhost_evt *evt; > > > + > > > + mutex_lock(&vs->vs_events_lock); > > > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > > > + vs->vs_events_dropped = true; > > > + mutex_unlock(&vs->vs_events_lock); > > > + return NULL; > > > + } > > > + > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); > > > > I think kzalloc not needed here, you init all fields. > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0. So that is 4 bytes just init them when you set rest of lun. > > Also, this basically means if backend does plug/unplug very quickly, > > we start consuming kernel memory without a limit. Not good. > > How about we allocate the event as part of target? > > There shouldn't ever be more than one hotplug > > event in flight per target, right? > > It is limited by VHOST_SCSI_MAX_EVENT. OK I missed that. But where does VHOST_SCSI_MAX_EVENT come from by the way? I don't see it in spec or did I miss it? It seems it's best not to lost events as long as there are descriptors in the event vq. > > > > > + if (evt) { > > > > Let's do clean error handling here and above: > > if (!evt) > > goto err; > > We can you do in err? You simply unlock and return. Why bother? How > cleaner it will be. There's another error above and we'll share code. It's not a big issue, just a bit nicer IMHO. 
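Taken together, the suggestions in this exchange (initialize the event explicitly rather than relying on kzalloc, and share the two error exits with a goto) might look roughly as below. This is purely a sketch; note that it also marks an allocation failure as a dropped event, which the posted patch does not.

static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
	u32 event, u32 reason)
{
	struct tcm_vhost_evt *evt;

	mutex_lock(&vs->vs_events_lock);
	if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT)
		goto err;

	evt = kmalloc(sizeof(*evt), GFP_KERNEL);
	if (!evt)
		goto err;

	/* Clear the whole event so lun[4-7] reach the guest as zero. */
	memset(&evt->event, 0, sizeof(evt->event));
	evt->event.event = event;
	evt->event.reason = reason;
	vs->vs_events_nr++;
	mutex_unlock(&vs->vs_events_lock);

	return evt;

err:
	vs->vs_events_dropped = true;
	mutex_unlock(&vs->vs_events_lock);
	return NULL;
}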
> > > + evt->event.event = event; > > > + evt->event.reason = reason; > > > + vs->vs_events_nr++; > > > + } > > > + mutex_unlock(&vs->vs_events_lock); > > > + > > > + return evt; > > > +} > > > + > > > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > { > > > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > > > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > kfree(tv_cmd); > > > } > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > > > + struct tcm_vhost_evt *evt) > > > +{ > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > > > + struct virtio_scsi_event *event = &evt->event; > > > + struct virtio_scsi_event __user *eventp; > > > + unsigned out, in; > > > + int head, ret; > > > + > > > + if (!tcm_vhost_check_endpoint(vq)) > > > + return; > > > + > > > + mutex_lock(&vs->vs_events_lock); > > > + mutex_lock(&vq->mutex); > > > +again: > > > + vhost_disable_notify(&vs->dev, vq); > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > > > + ARRAY_SIZE(vq->iov), &out, &in, > > > + NULL, NULL); > > > + if (head < 0) { > > > + vs->vs_events_dropped = true; > > > + goto out; > > > + } > > > + if (head == vq->num) { > > > + if (vhost_enable_notify(&vs->dev, vq)) > > > + goto again; > > > > Please write loops with while() or for(). > > Not with goto. goto is for error handling. > > This makes extra indention which is more ugly. I don't care. No loops with goto and that's a hard rule. > > > + vs->vs_events_dropped = true; > > > + goto out; > > > + } > > > + > > > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > > > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > > > + vq->iov[out].iov_len); > > > + vs->vs_events_dropped = true; > > > + goto out; > > > + } > > > + > > > + if (vs->vs_events_dropped) { > > > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > > > + vs->vs_events_dropped = false; > > > + } > > > + > > > + eventp = vq->iov[out].iov_base; > > > + ret = __copy_to_user(eventp, event, sizeof(*event)); > > > + if (!ret) > > > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > > > + else > > > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > > > +out: > > > + mutex_unlock(&vq->mutex); > > > + mutex_unlock(&vs->vs_events_lock); > > > +} > > > + > > > +static void tcm_vhost_evt_work(struct vhost_work *work) > > > +{ > > > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > > > + vs_event_work); > > > + struct tcm_vhost_evt *evt; > > > + struct llist_node *llnode; > > > + > > > + llnode = llist_del_all(&vs->vs_event_list); > > > > The assumption is that this is slow path thing, no need to worry about > > speed, yes? so let's do simple list_add, list_del etc. > > Why it is simpler? simple list_ with a lock is easier to use correctly. > We are using llist for cmd. Why it is better using > one for evt and the different for cmd? Similar code makes people easier > to read. OK fair enough. But my idea is above to use a work structure instead of a list of events. This way we don't need extra list at all, no new locks, nothing. 
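One way to read the "work structure instead of a list of events" suggestion is to embed both the event and its vhost_work in the target, so there is no event list and no dynamic allocation at all. A very rough sketch follows; the vs back-pointer field and the embedded evt member are hypothetical names, and the sketch glosses over initializing the work exactly once and over coalescing a plug that races with an unplug, which is where the overrun flag under the vq mutex would come in.

struct tcm_vhost_evt {
	struct virtio_scsi_event event;
	struct vhost_work work;		/* per-event work item */
	struct vhost_scsi *vs;
};

/* In struct tcm_vhost_tpg (tcm_vhost.h), replacing the kmalloc'ed event: */
	struct tcm_vhost_evt evt;	/* at most one hotplug event in flight per target */

static void tcm_vhost_evt_work(struct vhost_work *work)
{
	struct tcm_vhost_evt *evt = container_of(work, struct tcm_vhost_evt,
					work);

	/* No free here: the event is embedded in the tpg. */
	tcm_vhost_do_evt_work(evt->vs, evt);
}

The sending side would fill tpg->evt.event and call vhost_work_queue(&vs->dev, &tpg->evt.work); vhost_work_queue() already collapses a request to queue a work item that is still pending.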
> > > + while (llnode) { > > > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > > > + llnode = llist_next(llnode); > > > + tcm_vhost_do_evt_work(vs, evt); > > > + tcm_vhost_free_evt(vs, evt); > > > + } > > > +} > > > + > > > /* Fill in status and signal that we are done processing this command > > > * > > > * This is scheduled in the vhost work queue so we are called with the owner > > > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > > > pr_debug("%s: The handling func for control queue.\n", __func__); > > > } > > > > > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > > > + struct se_lun *lun, u32 event, u32 reason) > > > +{ > > > + struct tcm_vhost_evt *evt; > > > + > > > + evt = tcm_vhost_allocate_evt(vs, event, reason); > > > + if (!evt) > > > + return -ENOMEM; > > > + > > > + if (tpg && lun) { > > > + /* TODO: share lun setup code with virtio-scsi.ko */ > > > + evt->event.lun[0] = 0x01; > > > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > > > + if (lun->unpacked_lun >= 256) > > > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > > > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > > > + } > > > + > > > + llist_add(&evt->list, &vs->vs_event_list); > > > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > > > + > > > + return 0; > > > +} > > > + > > > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > > > { > > > - pr_debug("%s: The handling func for event queue.\n", __func__); > > > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > > > + poll.work); > > > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > > > + > > > + if (!tcm_vhost_check_endpoint(vq)) > > > + return; > > > > Again just drop this check. > > Why? Same as above. Because it's not safe to do this outside the vq mutex. 
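To make the objection concrete: vq->private_data is only stable while vq->mutex is held, so a race-free version of the kick handler would do the endpoint test with the mutex already taken. A sketch (assuming, as suggested elsewhere in the thread, that the dropped-events flag is also protected by the event vq mutex rather than a separate lock):

static void vhost_scsi_evt_handle_kick(struct vhost_work *work)
{
        struct vhost_virtqueue *vq = container_of(work,
                                struct vhost_virtqueue, poll.work);
        struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev);

        mutex_lock(&vq->mutex);
        /* private_data only changes under vq->mutex, so this test is stable */
        if (!tcm_vhost_check_endpoint(vq))
                goto out;

        if (vs->vs_events_dropped)
                tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
out:
        mutex_unlock(&vq->mutex);
}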
> > > + > > > + if (tcm_vhost_check_events_dropped(vs)) > > > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); > > > + > > > } > > > > > > static void vhost_scsi_handle_kick(struct vhost_work *work) > > > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) > > > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) > > > vhost_scsi_flush_vq(vs, i); > > > vhost_work_flush(&vs->dev, &vs->vs_completion_work); > > > + vhost_work_flush(&vs->dev, &vs->vs_event_work); > > > } > > > > > > /* > > > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( > > > return -EEXIST; > > > } > > > tv_tpg->tv_tpg_vhost_count++; > > > + tv_tpg->vhost_scsi = vs; > > > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; > > > smp_mb__after_atomic_inc(); > > > match = true; > > > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( > > > goto err_tpg; > > > } > > > tv_tpg->tv_tpg_vhost_count--; > > > + tv_tpg->vhost_scsi = NULL; > > > vs->vs_tpg[target] = NULL; > > > match = true; > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) > > > return -ENOMEM; > > > > > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); > > > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); > > > + > > > + s->vs_events_nr = 0; > > > + s->vs_events_dropped = false; > > > + mutex_init(&s->vs_events_lock); > > > > > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; > > > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; > > > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) > > > return "Unknown"; > > > } > > > > > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, > > > + struct se_lun *lun, bool plug) > > > +{ > > > + > > > + struct vhost_scsi *vs = tpg->vhost_scsi; > > > + u32 reason; > > > + > > > + mutex_lock(&tpg->tv_tpg_mutex); > > > + vs = tpg->vhost_scsi; > > > + mutex_unlock(&tpg->tv_tpg_mutex); > > > + if (!vs) > > > + return -EOPNOTSUPP; > > > > Why EOPNOTSUPP? When does this happen? > > When tpg->vhost_scsi has not been setup. E.g. no one starts a vhost-scsi > guest. So ENODEV or something. > > > + > > > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) > > > + return -EOPNOTSUPP; > > > + > > > + if (plug) > > > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; > > > + else > > > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; > > > + > > > > You have dropped the tpg lock so tpg->vhost_scsi can become > > NULL now. Why is this safe? > > > > > + return tcm_vhost_send_evt(vs, tpg, lun, > > > + VIRTIO_SCSI_T_TRANSPORT_RESET, > > > + reason); > > > +} > > > + > > > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > +{ > > > + return tcm_vhost_do_plug(tpg, lun, true); > > > +} > > > + > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > +{ > > > + return tcm_vhost_do_plug(tpg, lun, false); > > > +} > > > + > > > > What are these wrappers for? Seem useless ... > > > It make people easier to understand what's the true and false is about > in tcm_vhost_do_plug. So just pass in the reason and not a bool. When readers see VIRTIO_SCSI_EVT_RESET_RESCAN/VIRTIO_SCSI_EVT_RESET_REMOVED they will know what is going on. 
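A sketch of the variant being described here, with no hotplug/hotunplug wrappers, the virtio reason code passed straight through by the callers, and ENODEV for the "no vhost-scsi instance attached yet" case; illustration only, not the posted code:

static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg,
        struct se_lun *lun, u32 reason)
{
        struct vhost_scsi *vs;

        mutex_lock(&tpg->tv_tpg_mutex);
        vs = tpg->vhost_scsi;
        mutex_unlock(&tpg->tv_tpg_mutex);
        if (!vs)
                return -ENODEV; /* no vhost-scsi instance attached yet */

        if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
                return -EOPNOTSUPP;

        return tcm_vhost_send_evt(vs, tpg, lun,
                        VIRTIO_SCSI_T_TRANSPORT_RESET, reason);
}

/* Callers name the reason directly instead of passing a bool: */
tcm_vhost_do_plug(tv_tpg, lun, VIRTIO_SCSI_EVT_RESET_RESCAN);   /* port_link   */
tcm_vhost_do_plug(tv_tpg, lun, VIRTIO_SCSI_EVT_RESET_REMOVED);  /* port_unlink */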
> > > static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > struct se_lun *lun) > > > { > > > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > tv_tpg->tv_tpg_port_count++; > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > + tcm_vhost_hotplug(tv_tpg, lun); > > > + > > > return 0; > > > } > > > > > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, > > > - struct se_lun *se_lun) > > > + struct se_lun *lun) > > > { > > > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, > > > struct tcm_vhost_tpg, se_tpg); > > > - > > > mutex_lock(&tv_tpg->tv_tpg_mutex); > > > tv_tpg->tv_tpg_port_count--; > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > + > > > + tcm_vhost_hotunplug(tv_tpg, lun); > > > } > > > > > > static struct se_node_acl *tcm_vhost_make_nodeacl( > > > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h > > > index 1d2ae7a..94e9ee53 100644 > > > --- a/drivers/vhost/tcm_vhost.h > > > +++ b/drivers/vhost/tcm_vhost.h > > > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { > > > struct se_node_acl se_node_acl; > > > }; > > > > > > +struct vhost_scsi; > > > struct tcm_vhost_tpg { > > > /* Vhost port target portal group tag for TCM */ > > > u16 tport_tpgt; > > > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { > > > struct tcm_vhost_tport *tport; > > > /* Returned by tcm_vhost_make_tpg() */ > > > struct se_portal_group se_tpg; > > > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ > > > + struct vhost_scsi *vhost_scsi; > > > }; > > > > > > struct tcm_vhost_tport { > > > @@ -83,6 +86,13 @@ struct tcm_vhost_tport { > > > struct se_wwn tport_wwn; > > > }; > > > > > > +struct tcm_vhost_evt { > > > + /* virtio_scsi event */ > > > + struct virtio_scsi_event event; > > > + /* virtio_scsi event list, serviced from vhost worker thread */ > > > + struct llist_node list; > > > +}; > > > + > > > /* > > > * As per request from MST, keep TCM_VHOST related ioctl defines out of > > > * linux/vhost.h (user-space) for now.. > > > -- > > > 1.8.1.4 > > -- > Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-18 8:21 ` Michael S. Tsirkin @ 2013-04-19 2:34 ` Asias He 2013-04-20 19:01 ` Michael S. Tsirkin 0 siblings, 1 reply; 37+ messages in thread From: Asias He @ 2013-04-19 2:34 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Thu, Apr 18, 2013 at 11:21:54AM +0300, Michael S. Tsirkin wrote: > On Thu, Apr 18, 2013 at 04:59:08PM +0800, Asias He wrote: > > On Thu, Apr 18, 2013 at 10:34:29AM +0300, Michael S. Tsirkin wrote: > > > On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > > > > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > > > > virtio-scsi), hotplug support is added to virtio-scsi. > > > > > > > > This patch adds hotplug and hotunplug support to tcm_vhost. > > > > > > > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > > > > LUN in guest. > > > > > > > > Changes in v7: > > > > - Add vhost_work_flush for vs->vs_event_work to this series > > > > > > > > Changes in v6: > > > > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > > > > > > > Changes in v5: > > > > - Switch to int from u64 to vs_events_nr > > > > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > > > > - Do not nest dev mutex within vq mutex > > > > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > > > > - Rebase to target/master > > > > > > > > Changes in v4: > > > > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > > > > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > > > > > > > Changes in v3: > > > > - Separate the bug fix to another thread > > > > > > > > Changes in v2: > > > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > > > > - Fix racing of vs_events_nr > > > > - Add flush fix patch to this series > > > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > > > --- > > > > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > > > > drivers/vhost/tcm_vhost.h | 10 +++ > > > > 2 files changed, 214 insertions(+), 4 deletions(-) > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > index 8f05528..da2021b 100644 > > > > --- a/drivers/vhost/tcm_vhost.c > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > @@ -66,11 +66,13 @@ enum { > > > > * TODO: debug and remove the workaround. > > > > */ > > > > enum { > > > > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > > > > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > > > > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > > > > }; > > > > > > > > #define VHOST_SCSI_MAX_TARGET 256 > > > > #define VHOST_SCSI_MAX_VQ 128 > > > > +#define VHOST_SCSI_MAX_EVENT 128 > > > > > > > > struct vhost_scsi { > > > > /* Protected by vhost_scsi->dev.mutex */ > > > > @@ -82,6 +84,13 @@ struct vhost_scsi { > > > > > > > > struct vhost_work vs_completion_work; /* cmd completion work item */ > > > > struct llist_head vs_completion_list; /* cmd completion queue */ > > > > + > > > > + struct vhost_work vs_event_work; /* evt injection work item */ > > > > + struct llist_head vs_event_list; /* evt injection queue */ > > > > + > > > > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ > > > > > > Looking at this code, there are just so many locks now. > > > This does not make me feel safe :) > > > At least, please document lock nesting. > > > > > > > Do you see a real problem? 
> > Complexity is a real problem. My head already spins. No I don't see a > bug, but we need to simplify locking. > > And I think I see a nice way to do this: > 1. move away from a global work to per-event work - so no list > 2. remove dynamic allocation of events - so no events_nr > 3. set/test overrun flag under the appropriate vq mutex > > I think that's ideal. But we can move there in small steps. As a first > step - why can't we always take the vq mutex lock and drop > vs_events_lock? There are really different ways to solve the same problem. You are welcome to implement you ideas. > > > > + bool vs_events_dropped; /* any missed events */ > > > > + int vs_events_nr; /* num of pending events */ > > > > }; > > > > > > > > /* Local pointer to allocated TCM configfs fabric module */ > > > > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > return ret; > > > > } > > > > > > > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > > > > +{ > > > > + bool ret; > > > > + > > > > + mutex_lock(&vs->vs_events_lock); > > > > + ret = vs->vs_events_dropped; > > > > + mutex_unlock(&vs->vs_events_lock); > > > > + > > > > + return ret; > > > > +} > > > > + > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > { > > > > return 1; > > > > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > > > > return 0; > > > > } > > > > > > > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > > > > +{ > > > > + mutex_lock(&vs->vs_events_lock); > > > > + vs->vs_events_nr--; > > > > + kfree(evt); > > > > + mutex_unlock(&vs->vs_events_lock); > > > > +} > > > > + > > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > > > > + u32 event, u32 reason) > > > > +{ > > > > + struct tcm_vhost_evt *evt; > > > > + > > > > + mutex_lock(&vs->vs_events_lock); > > > > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > > > > + vs->vs_events_dropped = true; > > > > + mutex_unlock(&vs->vs_events_lock); > > > > + return NULL; > > > > + } > > > > + > > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); > > > > > > I think kzalloc not needed here, you init all fields. > > > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0. > > So that is 4 bytes just init them when you set rest of lun. It is not in the fast path. You can do it this way but not a must. > > > Also, this basically means if backend does plug/unplug very quickly, > > > we start consuming kernel memory without a limit. Not good. > > > How about we allocate the event as part of target? > > > There shouldn't ever be more than one hotplug > > > event in flight per target, right? > > > > It is limited by VHOST_SCSI_MAX_EVENT. > > OK I missed that. But where does VHOST_SCSI_MAX_EVENT come from by the > way? I don't see it in spec or did I miss it? It seems it's best not > to lost events as long as there are descriptors in the event vq. You wanted this. > > > > > > > + if (evt) { > > > > > > Let's do clean error handling here and above: > > > if (!evt) > > > goto err; > > > > We can you do in err? You simply unlock and return. Why bother? How > > cleaner it will be. > > There's another error above and we'll share code. > It's not a big issue, just a bit nicer IMHO. Can't we focus on real issues other than this? 
> > > > + evt->event.event = event; > > > > + evt->event.reason = reason; > > > > + vs->vs_events_nr++; > > > > + } > > > > + mutex_unlock(&vs->vs_events_lock); > > > > + > > > > + return evt; > > > > +} > > > > + > > > > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > { > > > > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > > > > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > kfree(tv_cmd); > > > > } > > > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > > > > + struct tcm_vhost_evt *evt) > > > > +{ > > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > > > > + struct virtio_scsi_event *event = &evt->event; > > > > + struct virtio_scsi_event __user *eventp; > > > > + unsigned out, in; > > > > + int head, ret; > > > > + > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > + return; > > > > + > > > > + mutex_lock(&vs->vs_events_lock); > > > > + mutex_lock(&vq->mutex); > > > > +again: > > > > + vhost_disable_notify(&vs->dev, vq); > > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > > > > + ARRAY_SIZE(vq->iov), &out, &in, > > > > + NULL, NULL); > > > > + if (head < 0) { > > > > + vs->vs_events_dropped = true; > > > > + goto out; > > > > + } > > > > + if (head == vq->num) { > > > > + if (vhost_enable_notify(&vs->dev, vq)) > > > > + goto again; > > > > > > Please write loops with while() or for(). > > > Not with goto. goto is for error handling. > > > > This makes extra indention which is more ugly. > > I don't care. No loops with goto and that's a hard rule. It is not a loop. > > > > + vs->vs_events_dropped = true; > > > > + goto out; > > > > + } > > > > + > > > > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > > > > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > > > > + vq->iov[out].iov_len); > > > > + vs->vs_events_dropped = true; > > > > + goto out; > > > > + } > > > > + > > > > + if (vs->vs_events_dropped) { > > > > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > > > > + vs->vs_events_dropped = false; > > > > + } > > > > + > > > > + eventp = vq->iov[out].iov_base; > > > > + ret = __copy_to_user(eventp, event, sizeof(*event)); > > > > + if (!ret) > > > > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > > > > + else > > > > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > > > > +out: > > > > + mutex_unlock(&vq->mutex); > > > > + mutex_unlock(&vs->vs_events_lock); > > > > +} > > > > + > > > > +static void tcm_vhost_evt_work(struct vhost_work *work) > > > > +{ > > > > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > > > > + vs_event_work); > > > > + struct tcm_vhost_evt *evt; > > > > + struct llist_node *llnode; > > > > + > > > > + llnode = llist_del_all(&vs->vs_event_list); > > > > > > The assumption is that this is slow path thing, no need to worry about > > > speed, yes? so let's do simple list_add, list_del etc. > > > > Why it is simpler? > > simple list_ with a lock is easier to use correctly. You need to use it correctly anyway for cmd. Why you want a lock here? > > We are using llist for cmd. Why it is better using > > one for evt and the different for cmd? Similar code makes people easier > > to read. > > OK fair enough. But my idea is above to use a work > structure instead of a list of events. > This way we don't need extra list at all, > no new locks, nothing. Again, You are welcome to implement your ideas. 
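For reference, the retry in the quoted tcm_vhost_do_evt_work can be expressed without the backward goto as a loop around vhost_get_vq_desc. A sketch of the same behaviour, replacing only the "again:" block (head, out, in and the out: label are the surrounding function's):

        for (;;) {
                vhost_disable_notify(&vs->dev, vq);
                head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
                                ARRAY_SIZE(vq->iov), &out, &in,
                                NULL, NULL);
                if (head < 0) {
                        vs->vs_events_dropped = true;
                        goto out;
                }
                if (head != vq->num)
                        break;          /* got a descriptor */
                /*
                 * Ring was empty: re-enable notification; retry only if new
                 * buffers showed up in the meantime, otherwise give up and
                 * remember that we dropped an event.
                 */
                if (!vhost_enable_notify(&vs->dev, vq)) {
                        vs->vs_events_dropped = true;
                        goto out;
                }
        }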
> > > > + while (llnode) { > > > > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > > > > + llnode = llist_next(llnode); > > > > + tcm_vhost_do_evt_work(vs, evt); > > > > + tcm_vhost_free_evt(vs, evt); > > > > + } > > > > +} > > > > + > > > > /* Fill in status and signal that we are done processing this command > > > > * > > > > * This is scheduled in the vhost work queue so we are called with the owner > > > > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > > > > pr_debug("%s: The handling func for control queue.\n", __func__); > > > > } > > > > > > > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > > > > + struct se_lun *lun, u32 event, u32 reason) > > > > +{ > > > > + struct tcm_vhost_evt *evt; > > > > + > > > > + evt = tcm_vhost_allocate_evt(vs, event, reason); > > > > + if (!evt) > > > > + return -ENOMEM; > > > > + > > > > + if (tpg && lun) { > > > > + /* TODO: share lun setup code with virtio-scsi.ko */ > > > > + evt->event.lun[0] = 0x01; > > > > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > > > > + if (lun->unpacked_lun >= 256) > > > > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > > > > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > > > > + } > > > > + > > > > + llist_add(&evt->list, &vs->vs_event_list); > > > > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > > > > + > > > > + return 0; > > > > +} > > > > + > > > > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > > > > { > > > > - pr_debug("%s: The handling func for event queue.\n", __func__); > > > > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > > > > + poll.work); > > > > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > > > > + > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > + return; > > > > > > Again just drop this check. > > > > Why? > > Same as above. > Because it's not safe to do this outside the vq mutex. Is it also not safe to the same in -net? Why is it not safe? 
> > > > + > > > > + if (tcm_vhost_check_events_dropped(vs)) > > > > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); > > > > + > > > > } > > > > > > > > static void vhost_scsi_handle_kick(struct vhost_work *work) > > > > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) > > > > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) > > > > vhost_scsi_flush_vq(vs, i); > > > > vhost_work_flush(&vs->dev, &vs->vs_completion_work); > > > > + vhost_work_flush(&vs->dev, &vs->vs_event_work); > > > > } > > > > > > > > /* > > > > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( > > > > return -EEXIST; > > > > } > > > > tv_tpg->tv_tpg_vhost_count++; > > > > + tv_tpg->vhost_scsi = vs; > > > > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; > > > > smp_mb__after_atomic_inc(); > > > > match = true; > > > > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( > > > > goto err_tpg; > > > > } > > > > tv_tpg->tv_tpg_vhost_count--; > > > > + tv_tpg->vhost_scsi = NULL; > > > > vs->vs_tpg[target] = NULL; > > > > match = true; > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) > > > > return -ENOMEM; > > > > > > > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); > > > > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); > > > > + > > > > + s->vs_events_nr = 0; > > > > + s->vs_events_dropped = false; > > > > + mutex_init(&s->vs_events_lock); > > > > > > > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; > > > > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; > > > > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) > > > > return "Unknown"; > > > > } > > > > > > > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, > > > > + struct se_lun *lun, bool plug) > > > > +{ > > > > + > > > > + struct vhost_scsi *vs = tpg->vhost_scsi; > > > > + u32 reason; > > > > + > > > > + mutex_lock(&tpg->tv_tpg_mutex); > > > > + vs = tpg->vhost_scsi; > > > > + mutex_unlock(&tpg->tv_tpg_mutex); > > > > + if (!vs) > > > > + return -EOPNOTSUPP; > > > > > > Why EOPNOTSUPP? When does this happen? > > > > When tpg->vhost_scsi has not been setup. E.g. no one starts a vhost-scsi > > guest. > > So ENODEV or something. okay. > > > > + > > > > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) > > > > + return -EOPNOTSUPP; > > > > + > > > > + if (plug) > > > > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; > > > > + else > > > > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; > > > > + > > > > > > You have dropped the tpg lock so tpg->vhost_scsi can become > > > NULL now. Why is this safe? > > > > > > > + return tcm_vhost_send_evt(vs, tpg, lun, > > > > + VIRTIO_SCSI_T_TRANSPORT_RESET, > > > > + reason); > > > > +} > > > > + > > > > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > +{ > > > > + return tcm_vhost_do_plug(tpg, lun, true); > > > > +} > > > > + > > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > +{ > > > > + return tcm_vhost_do_plug(tpg, lun, false); > > > > +} > > > > + > > > > > > What are these wrappers for? Seem useless ... > > > > > > It make people easier to understand what's the true and false is about > > in tcm_vhost_do_plug. > > So just pass in the reason and not a bool. > When readers see > VIRTIO_SCSI_EVT_RESET_RESCAN/VIRTIO_SCSI_EVT_RESET_REMOVED > they will know what is going on. 
How can you make the connection that VIRTIO_SCSI_EVT_RESET_RESCAN is hotplug and VIRTIO_SCSI_EVT_RESET_REMOVED is hot-unplug at first glance? > > > > static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > struct se_lun *lun) > > > > { > > > > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > tv_tpg->tv_tpg_port_count++; > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > > > + tcm_vhost_hotplug(tv_tpg, lun); > > > > + > > > > return 0; > > > > } > > > > > > > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, > > > > - struct se_lun *se_lun) > > > > + struct se_lun *lun) > > > > { > > > > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, > > > > struct tcm_vhost_tpg, se_tpg); > > > > - > > > > mutex_lock(&tv_tpg->tv_tpg_mutex); > > > > tv_tpg->tv_tpg_port_count--; > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > + > > > > + tcm_vhost_hotunplug(tv_tpg, lun); > > > > } > > > > > > > > static struct se_node_acl *tcm_vhost_make_nodeacl( > > > > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h > > > > index 1d2ae7a..94e9ee53 100644 > > > > --- a/drivers/vhost/tcm_vhost.h > > > > +++ b/drivers/vhost/tcm_vhost.h > > > > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { > > > > struct se_node_acl se_node_acl; > > > > }; > > > > > > > > +struct vhost_scsi; > > > > struct tcm_vhost_tpg { > > > > /* Vhost port target portal group tag for TCM */ > > > > u16 tport_tpgt; > > > > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { > > > > struct tcm_vhost_tport *tport; > > > > /* Returned by tcm_vhost_make_tpg() */ > > > > struct se_portal_group se_tpg; > > > > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ > > > > + struct vhost_scsi *vhost_scsi; > > > > }; > > > > > > > > struct tcm_vhost_tport { > > > > @@ -83,6 +86,13 @@ struct tcm_vhost_tport { > > > > struct se_wwn tport_wwn; > > > > }; > > > > > > > > +struct tcm_vhost_evt { > > > > + /* virtio_scsi event */ > > > > + struct virtio_scsi_event event; > > > > + /* virtio_scsi event list, serviced from vhost worker thread */ > > > > + struct llist_node list; > > > > +}; > > > > + > > > > /* > > > > * As per request from MST, keep TCM_VHOST related ioctl defines out of > > > > * linux/vhost.h (user-space) for now.. > > > > -- > > > > 1.8.1.4 > > > > -- > > Asias -- Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-19 2:34 ` Asias He @ 2013-04-20 19:01 ` Michael S. Tsirkin 2013-04-22 9:20 ` Asias He 0 siblings, 1 reply; 37+ messages in thread From: Michael S. Tsirkin @ 2013-04-20 19:01 UTC (permalink / raw) To: Asias He Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Fri, Apr 19, 2013 at 10:34:10AM +0800, Asias He wrote: > On Thu, Apr 18, 2013 at 11:21:54AM +0300, Michael S. Tsirkin wrote: > > On Thu, Apr 18, 2013 at 04:59:08PM +0800, Asias He wrote: > > > On Thu, Apr 18, 2013 at 10:34:29AM +0300, Michael S. Tsirkin wrote: > > > > On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > > > > > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > > > > > virtio-scsi), hotplug support is added to virtio-scsi. > > > > > > > > > > This patch adds hotplug and hotunplug support to tcm_vhost. > > > > > > > > > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > > > > > LUN in guest. > > > > > > > > > > Changes in v7: > > > > > - Add vhost_work_flush for vs->vs_event_work to this series > > > > > > > > > > Changes in v6: > > > > > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > > > > > > > > > Changes in v5: > > > > > - Switch to int from u64 to vs_events_nr > > > > > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > > > > > - Do not nest dev mutex within vq mutex > > > > > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > > > > > - Rebase to target/master > > > > > > > > > > Changes in v4: > > > > > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > > > > > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > > > > > > > > > Changes in v3: > > > > > - Separate the bug fix to another thread > > > > > > > > > > Changes in v2: > > > > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > > > > > - Fix racing of vs_events_nr > > > > > - Add flush fix patch to this series > > > > > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > > > > --- > > > > > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > > > > > drivers/vhost/tcm_vhost.h | 10 +++ > > > > > 2 files changed, 214 insertions(+), 4 deletions(-) > > > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > > index 8f05528..da2021b 100644 > > > > > --- a/drivers/vhost/tcm_vhost.c > > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > > @@ -66,11 +66,13 @@ enum { > > > > > * TODO: debug and remove the workaround. 
> > > > > */ > > > > > enum { > > > > > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > > > > > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > > > > > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > > > > > }; > > > > > > > > > > #define VHOST_SCSI_MAX_TARGET 256 > > > > > #define VHOST_SCSI_MAX_VQ 128 > > > > > +#define VHOST_SCSI_MAX_EVENT 128 > > > > > > > > > > struct vhost_scsi { > > > > > /* Protected by vhost_scsi->dev.mutex */ > > > > > @@ -82,6 +84,13 @@ struct vhost_scsi { > > > > > > > > > > struct vhost_work vs_completion_work; /* cmd completion work item */ > > > > > struct llist_head vs_completion_list; /* cmd completion queue */ > > > > > + > > > > > + struct vhost_work vs_event_work; /* evt injection work item */ > > > > > + struct llist_head vs_event_list; /* evt injection queue */ > > > > > + > > > > > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ > > > > > > > > Looking at this code, there are just so many locks now. > > > > This does not make me feel safe :) > > > > At least, please document lock nesting. > > > > > > > > > > Do you see a real problem? > > > > Complexity is a real problem. My head already spins. No I don't see a > > bug, but we need to simplify locking. > > > > And I think I see a nice way to do this: > > 1. move away from a global work to per-event work - so no list > > 2. remove dynamic allocation of events - so no events_nr > > 3. set/test overrun flag under the appropriate vq mutex > > > > I think that's ideal. But we can move there in small steps. As a first > > step - why can't we always take the vq mutex lock and drop > > vs_events_lock? > > There are really different ways to solve the same problem. You are > welcome to implement you ideas. I have a problem with this patch. It just seems too tricky, while it is really trying to do a very simple thing. There's new lock nesting. There are new places which do lock, read data, unlock, use data. There's new RCU. All this feels fragile, and will be hard to maintain even though I don't see bugs. So I would prefer it if you tried to simplify this code making it easier to maintain. I sent some ideas on how to do this, but if you like, do it some other way. I also made some comments about the coding style but these are less important (except goto again I guess) if these will be te only issues, I'd say we can apply. 
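As a concrete example of the "first step" above, the event accounting could reuse the event virtqueue's mutex instead of a dedicated vs_events_lock. A sketch only, not the posted code; it assumes the caller holds no other vhost locks:

static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs,
        u32 event, u32 reason)
{
        struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT];
        struct tcm_vhost_evt *evt;

        /* vs_events_nr and vs_events_dropped now live under this vq's mutex,
         * which tcm_vhost_do_evt_work already takes anyway. */
        mutex_lock(&vq->mutex);
        if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) {
                vs->vs_events_dropped = true;
                mutex_unlock(&vq->mutex);
                return NULL;
        }

        evt = kzalloc(sizeof(*evt), GFP_KERNEL);
        if (evt) {
                evt->event.event = event;
                evt->event.reason = reason;
                vs->vs_events_nr++;
        }
        mutex_unlock(&vq->mutex);

        return evt;
}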
> > > > > + bool vs_events_dropped; /* any missed events */ > > > > > + int vs_events_nr; /* num of pending events */ > > > > > }; > > > > > > > > > > /* Local pointer to allocated TCM configfs fabric module */ > > > > > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > > return ret; > > > > > } > > > > > > > > > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > > > > > +{ > > > > > + bool ret; > > > > > + > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > + ret = vs->vs_events_dropped; > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > + > > > > > + return ret; > > > > > +} > > > > > + > > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > > { > > > > > return 1; > > > > > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > > > > > return 0; > > > > > } > > > > > > > > > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > > > > > +{ > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > + vs->vs_events_nr--; > > > > > + kfree(evt); > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > +} > > > > > + > > > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > > > > > + u32 event, u32 reason) > > > > > +{ > > > > > + struct tcm_vhost_evt *evt; > > > > > + > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > > > > > + vs->vs_events_dropped = true; > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > + return NULL; > > > > > + } > > > > > + > > > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); > > > > > > > > I think kzalloc not needed here, you init all fields. > > > > > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0. > > > > So that is 4 bytes just init them when you set rest of lun. > > It is not in the fast path. You can do it this way but not a must. I think that's a bit cleaner than relying on kzalloc to zero-initialize. > > > > Also, this basically means if backend does plug/unplug very quickly, > > > > we start consuming kernel memory without a limit. Not good. > > > > How about we allocate the event as part of target? > > > > There shouldn't ever be more than one hotplug > > > > event in flight per target, right? > > > > > > It is limited by VHOST_SCSI_MAX_EVENT. > > > > OK I missed that. But where does VHOST_SCSI_MAX_EVENT come from by the > > way? I don't see it in spec or did I miss it? It seems it's best not > > to lost events as long as there are descriptors in the event vq. > > You wanted this. Yes I wanted some limit on the amount of memory we use up :) > > > > > > > > > + if (evt) { > > > > > > > > Let's do clean error handling here and above: > > > > if (!evt) > > > > goto err; > > > > > > We can you do in err? You simply unlock and return. Why bother? How > > > cleaner it will be. > > > > There's another error above and we'll share code. > > It's not a big issue, just a bit nicer IMHO. > > Can't we focus on real issues other than this? Yes style issues are not the most important here. 
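The 4 bytes in question are event.lun[4..7]. The alternative described above, plain kmalloc in tcm_vhost_allocate_evt plus clearing the LUN bytes where the rest of the LUN is filled in, would look roughly like this fragment of tcm_vhost_send_evt (sketch only):

        evt = tcm_vhost_allocate_evt(vs, event, reason);  /* kmalloc, not kzalloc */
        if (!evt)
                return -ENOMEM;

        /* lun[] is copied to the guest even for VIRTIO_SCSI_T_NO_EVENT,
         * so clear all 8 bytes explicitly instead of relying on kzalloc. */
        memset(evt->event.lun, 0, sizeof(evt->event.lun));
        if (tpg && lun) {
                /* TODO: share lun setup code with virtio-scsi.ko */
                evt->event.lun[0] = 0x01;
                evt->event.lun[1] = tpg->tport_tpgt & 0xFF;
                if (lun->unpacked_lun >= 256)
                        evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40;
                evt->event.lun[3] = lun->unpacked_lun & 0xFF;
        }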
> > > > > + evt->event.event = event; > > > > > + evt->event.reason = reason; > > > > > + vs->vs_events_nr++; > > > > > + } > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > + > > > > > + return evt; > > > > > +} > > > > > + > > > > > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > { > > > > > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > > > > > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > kfree(tv_cmd); > > > > > } > > > > > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > > > > > + struct tcm_vhost_evt *evt) > > > > > +{ > > > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > > > > > + struct virtio_scsi_event *event = &evt->event; > > > > > + struct virtio_scsi_event __user *eventp; > > > > > + unsigned out, in; > > > > > + int head, ret; > > > > > + > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > + return; > > > > > + > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > + mutex_lock(&vq->mutex); > > > > > +again: > > > > > + vhost_disable_notify(&vs->dev, vq); > > > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > > > > > + ARRAY_SIZE(vq->iov), &out, &in, > > > > > + NULL, NULL); > > > > > + if (head < 0) { > > > > > + vs->vs_events_dropped = true; > > > > > + goto out; > > > > > + } > > > > > + if (head == vq->num) { > > > > > + if (vhost_enable_notify(&vs->dev, vq)) > > > > > + goto again; > > > > > > > > Please write loops with while() or for(). > > > > Not with goto. goto is for error handling. > > > > > > This makes extra indention which is more ugly. > > > > I don't care. No loops with goto and that's a hard rule. > > It is not a loop. If same code repeats, it's a loop. > > > > > + vs->vs_events_dropped = true; > > > > > + goto out; > > > > > + } > > > > > + > > > > > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > > > > > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > > > > > + vq->iov[out].iov_len); > > > > > + vs->vs_events_dropped = true; > > > > > + goto out; > > > > > + } > > > > > + > > > > > + if (vs->vs_events_dropped) { > > > > > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > > > > > + vs->vs_events_dropped = false; > > > > > + } > > > > > + > > > > > + eventp = vq->iov[out].iov_base; > > > > > + ret = __copy_to_user(eventp, event, sizeof(*event)); > > > > > + if (!ret) > > > > > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > > > > > + else > > > > > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > > > > > +out: > > > > > + mutex_unlock(&vq->mutex); > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > +} > > > > > + > > > > > +static void tcm_vhost_evt_work(struct vhost_work *work) > > > > > +{ > > > > > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > > > > > + vs_event_work); > > > > > + struct tcm_vhost_evt *evt; > > > > > + struct llist_node *llnode; > > > > > + > > > > > + llnode = llist_del_all(&vs->vs_event_list); > > > > > > > > The assumption is that this is slow path thing, no need to worry about > > > > speed, yes? so let's do simple list_add, list_del etc. > > > > > > Why it is simpler? > > > > simple list_ with a lock is easier to use correctly. > > You need to use it correctly anyway for cmd. Why you want a lock here? > > > > We are using llist for cmd. Why it is better using > > > one for evt and the different for cmd? Similar code makes people easier > > > to read. > > > > OK fair enough. 
But my idea is above to use a work > > structure instead of a list of events. > > This way we don't need extra list at all, > > no new locks, nothing. > > Again, You are welcome to implement your ideas. I might but I'm otherwise occupied at the moment. > > > > > + while (llnode) { > > > > > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > > > > > + llnode = llist_next(llnode); > > > > > + tcm_vhost_do_evt_work(vs, evt); > > > > > + tcm_vhost_free_evt(vs, evt); > > > > > + } > > > > > +} > > > > > + > > > > > /* Fill in status and signal that we are done processing this command > > > > > * > > > > > * This is scheduled in the vhost work queue so we are called with the owner > > > > > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > > > > > pr_debug("%s: The handling func for control queue.\n", __func__); > > > > > } > > > > > > > > > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > > > > > + struct se_lun *lun, u32 event, u32 reason) > > > > > +{ > > > > > + struct tcm_vhost_evt *evt; > > > > > + > > > > > + evt = tcm_vhost_allocate_evt(vs, event, reason); > > > > > + if (!evt) > > > > > + return -ENOMEM; > > > > > + > > > > > + if (tpg && lun) { > > > > > + /* TODO: share lun setup code with virtio-scsi.ko */ > > > > > + evt->event.lun[0] = 0x01; > > > > > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > > > > > + if (lun->unpacked_lun >= 256) > > > > > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > > > > > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > > > > > + } > > > > > + > > > > > + llist_add(&evt->list, &vs->vs_event_list); > > > > > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > > > > > + > > > > > + return 0; > > > > > +} > > > > > + > > > > > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > > > > > { > > > > > - pr_debug("%s: The handling func for event queue.\n", __func__); > > > > > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > > > > > + poll.work); > > > > > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > > > > > + > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > + return; > > > > > > > > Again just drop this check. > > > > > > Why? > > > > Same as above. > > Because it's not safe to do this outside the vq mutex. > > Is it also not safe to the same in -net? It's safe there since we dereference the pointer. > Why is it not safe? Because you don't dereference the pointer. 
> > > > > + > > > > > + if (tcm_vhost_check_events_dropped(vs)) > > > > > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); > > > > > + > > > > > } > > > > > > > > > > static void vhost_scsi_handle_kick(struct vhost_work *work) > > > > > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) > > > > > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) > > > > > vhost_scsi_flush_vq(vs, i); > > > > > vhost_work_flush(&vs->dev, &vs->vs_completion_work); > > > > > + vhost_work_flush(&vs->dev, &vs->vs_event_work); > > > > > } > > > > > > > > > > /* > > > > > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( > > > > > return -EEXIST; > > > > > } > > > > > tv_tpg->tv_tpg_vhost_count++; > > > > > + tv_tpg->vhost_scsi = vs; > > > > > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; > > > > > smp_mb__after_atomic_inc(); > > > > > match = true; > > > > > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( > > > > > goto err_tpg; > > > > > } > > > > > tv_tpg->tv_tpg_vhost_count--; > > > > > + tv_tpg->vhost_scsi = NULL; > > > > > vs->vs_tpg[target] = NULL; > > > > > match = true; > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) > > > > > return -ENOMEM; > > > > > > > > > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); > > > > > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); > > > > > + > > > > > + s->vs_events_nr = 0; > > > > > + s->vs_events_dropped = false; > > > > > + mutex_init(&s->vs_events_lock); > > > > > > > > > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; > > > > > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; > > > > > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) > > > > > return "Unknown"; > > > > > } > > > > > > > > > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, > > > > > + struct se_lun *lun, bool plug) > > > > > +{ > > > > > + > > > > > + struct vhost_scsi *vs = tpg->vhost_scsi; > > > > > + u32 reason; > > > > > + > > > > > + mutex_lock(&tpg->tv_tpg_mutex); > > > > > + vs = tpg->vhost_scsi; > > > > > + mutex_unlock(&tpg->tv_tpg_mutex); > > > > > + if (!vs) > > > > > + return -EOPNOTSUPP; > > > > > > > > Why EOPNOTSUPP? When does this happen? > > > > > > When tpg->vhost_scsi has not been setup. E.g. no one starts a vhost-scsi > > > guest. > > > > So ENODEV or something. > > okay. > > > > > > + > > > > > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) > > > > > + return -EOPNOTSUPP; > > > > > + > > > > > + if (plug) > > > > > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; > > > > > + else > > > > > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; > > > > > + > > > > > > > > You have dropped the tpg lock so tpg->vhost_scsi can become > > > > NULL now. Why is this safe? > > > > > > > > > + return tcm_vhost_send_evt(vs, tpg, lun, > > > > > + VIRTIO_SCSI_T_TRANSPORT_RESET, > > > > > + reason); > > > > > +} > > > > > + > > > > > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > > +{ > > > > > + return tcm_vhost_do_plug(tpg, lun, true); > > > > > +} > > > > > + > > > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > > +{ > > > > > + return tcm_vhost_do_plug(tpg, lun, false); > > > > > +} > > > > > + > > > > > > > > What are these wrappers for? Seem useless ... 
> > > > > > > > > It make people easier to understand what's the true and false is about > > > in tcm_vhost_do_plug. > > > > So just pass in the reason and not a bool. > > When readers see > > VIRTIO_SCSI_EVT_RESET_RESCAN/VIRTIO_SCSI_EVT_RESET_REMOVED > > they will know what is going on. > > How can you make the connection that VIRTIO_SCSI_EVT_RESET_RESCAN is > hotplug and VIRTIO_SCSI_EVT_RESET_REMOVED is hot-unplug at first glance? > > > > > > static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > > struct se_lun *lun) > > > > > { > > > > > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > > tv_tpg->tv_tpg_port_count++; > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > > > > > + tcm_vhost_hotplug(tv_tpg, lun); > > > > > + > > > > > return 0; > > > > > } > > > > > > > > > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, > > > > > - struct se_lun *se_lun) > > > > > + struct se_lun *lun) > > > > > { > > > > > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, > > > > > struct tcm_vhost_tpg, se_tpg); > > > > > - > > > > > mutex_lock(&tv_tpg->tv_tpg_mutex); > > > > > tv_tpg->tv_tpg_port_count--; > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > + > > > > > + tcm_vhost_hotunplug(tv_tpg, lun); > > > > > } > > > > > > > > > > static struct se_node_acl *tcm_vhost_make_nodeacl( > > > > > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h > > > > > index 1d2ae7a..94e9ee53 100644 > > > > > --- a/drivers/vhost/tcm_vhost.h > > > > > +++ b/drivers/vhost/tcm_vhost.h > > > > > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { > > > > > struct se_node_acl se_node_acl; > > > > > }; > > > > > > > > > > +struct vhost_scsi; > > > > > struct tcm_vhost_tpg { > > > > > /* Vhost port target portal group tag for TCM */ > > > > > u16 tport_tpgt; > > > > > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { > > > > > struct tcm_vhost_tport *tport; > > > > > /* Returned by tcm_vhost_make_tpg() */ > > > > > struct se_portal_group se_tpg; > > > > > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ > > > > > + struct vhost_scsi *vhost_scsi; > > > > > }; > > > > > > > > > > struct tcm_vhost_tport { > > > > > @@ -83,6 +86,13 @@ struct tcm_vhost_tport { > > > > > struct se_wwn tport_wwn; > > > > > }; > > > > > > > > > > +struct tcm_vhost_evt { > > > > > + /* virtio_scsi event */ > > > > > + struct virtio_scsi_event event; > > > > > + /* virtio_scsi event list, serviced from vhost worker thread */ > > > > > + struct llist_node list; > > > > > +}; > > > > > + > > > > > /* > > > > > * As per request from MST, keep TCM_VHOST related ioctl defines out of > > > > > * linux/vhost.h (user-space) for now.. > > > > > -- > > > > > 1.8.1.4 > > > > > > -- > > > Asias > > -- > Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-20 19:01 ` Michael S. Tsirkin @ 2013-04-22 9:20 ` Asias He 2013-04-22 13:17 ` Michael S. Tsirkin 0 siblings, 1 reply; 37+ messages in thread From: Asias He @ 2013-04-22 9:20 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Sat, Apr 20, 2013 at 10:01:31PM +0300, Michael S. Tsirkin wrote: > On Fri, Apr 19, 2013 at 10:34:10AM +0800, Asias He wrote: > > On Thu, Apr 18, 2013 at 11:21:54AM +0300, Michael S. Tsirkin wrote: > > > On Thu, Apr 18, 2013 at 04:59:08PM +0800, Asias He wrote: > > > > On Thu, Apr 18, 2013 at 10:34:29AM +0300, Michael S. Tsirkin wrote: > > > > > On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > > > > > > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > > > > > > virtio-scsi), hotplug support is added to virtio-scsi. > > > > > > > > > > > > This patch adds hotplug and hotunplug support to tcm_vhost. > > > > > > > > > > > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > > > > > > LUN in guest. > > > > > > > > > > > > Changes in v7: > > > > > > - Add vhost_work_flush for vs->vs_event_work to this series > > > > > > > > > > > > Changes in v6: > > > > > > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > > > > > > > > > > > Changes in v5: > > > > > > - Switch to int from u64 to vs_events_nr > > > > > > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > > > > > > - Do not nest dev mutex within vq mutex > > > > > > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > > > > > > - Rebase to target/master > > > > > > > > > > > > Changes in v4: > > > > > > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > > > > > > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > > > > > > > > > > > Changes in v3: > > > > > > - Separate the bug fix to another thread > > > > > > > > > > > > Changes in v2: > > > > > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > > > > > > - Fix racing of vs_events_nr > > > > > > - Add flush fix patch to this series > > > > > > > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > > > > > --- > > > > > > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > > > > > > drivers/vhost/tcm_vhost.h | 10 +++ > > > > > > 2 files changed, 214 insertions(+), 4 deletions(-) > > > > > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > > > index 8f05528..da2021b 100644 > > > > > > --- a/drivers/vhost/tcm_vhost.c > > > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > > > @@ -66,11 +66,13 @@ enum { > > > > > > * TODO: debug and remove the workaround. 
> > > > > > */ > > > > > > enum { > > > > > > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > > > > > > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > > > > > > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > > > > > > }; > > > > > > > > > > > > #define VHOST_SCSI_MAX_TARGET 256 > > > > > > #define VHOST_SCSI_MAX_VQ 128 > > > > > > +#define VHOST_SCSI_MAX_EVENT 128 > > > > > > > > > > > > struct vhost_scsi { > > > > > > /* Protected by vhost_scsi->dev.mutex */ > > > > > > @@ -82,6 +84,13 @@ struct vhost_scsi { > > > > > > > > > > > > struct vhost_work vs_completion_work; /* cmd completion work item */ > > > > > > struct llist_head vs_completion_list; /* cmd completion queue */ > > > > > > + > > > > > > + struct vhost_work vs_event_work; /* evt injection work item */ > > > > > > + struct llist_head vs_event_list; /* evt injection queue */ > > > > > > + > > > > > > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ > > > > > > > > > > Looking at this code, there are just so many locks now. > > > > > This does not make me feel safe :) > > > > > At least, please document lock nesting. > > > > > > > > > > > > > Do you see a real problem? > > > > > > Complexity is a real problem. My head already spins. No I don't see a > > > bug, but we need to simplify locking. > > > > > > And I think I see a nice way to do this: > > > 1. move away from a global work to per-event work - so no list > > > 2. remove dynamic allocation of events - so no events_nr > > > 3. set/test overrun flag under the appropriate vq mutex > > > > > > I think that's ideal. But we can move there in small steps. As a first > > > step - why can't we always take the vq mutex lock and drop > > > vs_events_lock? > > > > There are really different ways to solve the same problem. You are > > welcome to implement you ideas. > > I have a problem with this patch. It just seems too tricky, > while it is really trying to do a very simple thing. How tricky is it? > There's new lock nesting. I used dev mutex in the past, but it does not work because it introduces a deadlock in vhost_scsi_set_endpoint. vs_events is a per device stat, so I think it deserves a per device lock instead of a per queue lock. > There are new places which do lock, read > data, unlock, use data. With vs_events_lock? > There's new RCU. It is not a *new* RCU. It uses the existing one vq->private_data. > All this feels fragile, and > will be hard to maintain even though I don't see bugs. > So I would prefer it if you tried to simplify > this code making it easier to maintain. I sent some ideas on how to > do this, but if you like, do it some other way. I think it is pretty simple already. > I also made some comments about the coding style but these > are less important (except goto again I guess) if these > will be te only issues, I'd say we can apply. 
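For the record, the ordering the posted code itself uses is vs_events_lock taken outside vq->mutex (see tcm_vhost_do_evt_work), and per the v5 changelog the dev mutex is never nested within a vq mutex. A documentation comment capturing that nesting, as requested earlier in the thread, might read (sketch only):

/*
 * Lock ordering for the hotplug/event path:
 *
 *   vs->vs_events_lock                    (outer) event counter and dropped flag
 *     vq->mutex of VHOST_SCSI_VQ_EVT      (inner) event virtqueue access
 *
 * vs->dev.mutex is never taken inside a vq->mutex ("Do not nest dev mutex
 * within vq mutex", v5 changelog).  tpg->tv_tpg_mutex only guards the
 * tpg->vhost_scsi back-pointer here and is dropped before
 * tcm_vhost_send_evt() is called, so it never nests with the locks above.
 */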
> > > > > > > > + bool vs_events_dropped; /* any missed events */ > > > > > > + int vs_events_nr; /* num of pending events */ > > > > > > }; > > > > > > > > > > > > /* Local pointer to allocated TCM configfs fabric module */ > > > > > > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > > > return ret; > > > > > > } > > > > > > > > > > > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > > > > > > +{ > > > > > > + bool ret; > > > > > > + > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > + ret = vs->vs_events_dropped; > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > + > > > > > > + return ret; > > > > > > +} > > > > > > + > > > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > > > { > > > > > > return 1; > > > > > > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > > > > > > return 0; > > > > > > } > > > > > > > > > > > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > > > > > > +{ > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > + vs->vs_events_nr--; > > > > > > + kfree(evt); > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > +} > > > > > > + > > > > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > > > > > > + u32 event, u32 reason) > > > > > > +{ > > > > > > + struct tcm_vhost_evt *evt; > > > > > > + > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > > > > > > + vs->vs_events_dropped = true; > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > + return NULL; > > > > > > + } > > > > > > + > > > > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); > > > > > > > > > > I think kzalloc not needed here, you init all fields. > > > > > > > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0. > > > > > > So that is 4 bytes just init them when you set rest of lun. > > > > It is not in the fast path. You can do it this way but not a must. > > I think that's a bit cleaner than relying on kzalloc to zero-initialize. I think it is a bit shorter, 4 lines saved. > > > > > Also, this basically means if backend does plug/unplug very quickly, > > > > > we start consuming kernel memory without a limit. Not good. > > > > > How about we allocate the event as part of target? > > > > > There shouldn't ever be more than one hotplug > > > > > event in flight per target, right? > > > > > > > > It is limited by VHOST_SCSI_MAX_EVENT. > > > > > > OK I missed that. But where does VHOST_SCSI_MAX_EVENT come from by the > > > way? I don't see it in spec or did I miss it? It seems it's best not > > > to lost events as long as there are descriptors in the event vq. > > > > You wanted this. > > Yes I wanted some limit on the amount of memory we use up :) So we limited this already. > > > > > > > > > > > + if (evt) { > > > > > > > > > > Let's do clean error handling here and above: > > > > > if (!evt) > > > > > goto err; > > > > > > > > We can you do in err? You simply unlock and return. Why bother? How > > > > cleaner it will be. > > > > > > There's another error above and we'll share code. > > > It's not a big issue, just a bit nicer IMHO. > > > > Can't we focus on real issues other than this? > > Yes style issues are not the most important here. 
> > > > > > > + evt->event.event = event; > > > > > > + evt->event.reason = reason; > > > > > > + vs->vs_events_nr++; > > > > > > + } > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > + > > > > > > + return evt; > > > > > > +} > > > > > > + > > > > > > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > > { > > > > > > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > > > > > > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > > kfree(tv_cmd); > > > > > > } > > > > > > > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > > > > > > + struct tcm_vhost_evt *evt) > > > > > > +{ > > > > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > > > > > > + struct virtio_scsi_event *event = &evt->event; > > > > > > + struct virtio_scsi_event __user *eventp; > > > > > > + unsigned out, in; > > > > > > + int head, ret; > > > > > > + > > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > > + return; > > > > > > + > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > + mutex_lock(&vq->mutex); > > > > > > +again: > > > > > > + vhost_disable_notify(&vs->dev, vq); > > > > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > > > > > > + ARRAY_SIZE(vq->iov), &out, &in, > > > > > > + NULL, NULL); > > > > > > + if (head < 0) { > > > > > > + vs->vs_events_dropped = true; > > > > > > + goto out; > > > > > > + } > > > > > > + if (head == vq->num) { > > > > > > + if (vhost_enable_notify(&vs->dev, vq)) > > > > > > + goto again; > > > > > > > > > > Please write loops with while() or for(). > > > > > Not with goto. goto is for error handling. > > > > > > > > This makes extra indention which is more ugly. > > > > > > I don't care. No loops with goto and that's a hard rule. > > > > It is not a loop. > > If same code repeats, it's a loop. We really need here is to call vhost_get_vq_desc once. It's not a loop like other places to use while() or for() with vhost_get_vq_desc. > > > > > > + vs->vs_events_dropped = true; > > > > > > + goto out; > > > > > > + } > > > > > > + > > > > > > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > > > > > > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > > > > > > + vq->iov[out].iov_len); > > > > > > + vs->vs_events_dropped = true; > > > > > > + goto out; > > > > > > + } > > > > > > + > > > > > > + if (vs->vs_events_dropped) { > > > > > > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > > > > > > + vs->vs_events_dropped = false; > > > > > > + } > > > > > > + > > > > > > + eventp = vq->iov[out].iov_base; > > > > > > + ret = __copy_to_user(eventp, event, sizeof(*event)); > > > > > > + if (!ret) > > > > > > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > > > > > > + else > > > > > > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > > > > > > +out: > > > > > > + mutex_unlock(&vq->mutex); > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > +} > > > > > > + > > > > > > +static void tcm_vhost_evt_work(struct vhost_work *work) > > > > > > +{ > > > > > > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > > > > > > + vs_event_work); > > > > > > + struct tcm_vhost_evt *evt; > > > > > > + struct llist_node *llnode; > > > > > > + > > > > > > + llnode = llist_del_all(&vs->vs_event_list); > > > > > > > > > > The assumption is that this is slow path thing, no need to worry about > > > > > speed, yes? so let's do simple list_add, list_del etc. > > > > > > > > Why it is simpler? 
> > > > > > simple list_ with a lock is easier to use correctly. > > > > You need to use it correctly anyway for cmd. Why you want a lock here? > > > > > > We are using llist for cmd. Why it is better using > > > > one for evt and the different for cmd? Similar code makes people easier > > > > to read. > > > > > > OK fair enough. But my idea is above to use a work > > > structure instead of a list of events. > > > This way we don't need extra list at all, > > > no new locks, nothing. > > > > Again, You are welcome to implement your ideas. > > I might but I'm otherwise occupied at the moment. > > > > > > > + while (llnode) { > > > > > > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > > > > > > + llnode = llist_next(llnode); > > > > > > + tcm_vhost_do_evt_work(vs, evt); > > > > > > + tcm_vhost_free_evt(vs, evt); > > > > > > + } > > > > > > +} > > > > > > + > > > > > > /* Fill in status and signal that we are done processing this command > > > > > > * > > > > > > * This is scheduled in the vhost work queue so we are called with the owner > > > > > > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > > > > > > pr_debug("%s: The handling func for control queue.\n", __func__); > > > > > > } > > > > > > > > > > > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > > > > > > + struct se_lun *lun, u32 event, u32 reason) > > > > > > +{ > > > > > > + struct tcm_vhost_evt *evt; > > > > > > + > > > > > > + evt = tcm_vhost_allocate_evt(vs, event, reason); > > > > > > + if (!evt) > > > > > > + return -ENOMEM; > > > > > > + > > > > > > + if (tpg && lun) { > > > > > > + /* TODO: share lun setup code with virtio-scsi.ko */ > > > > > > + evt->event.lun[0] = 0x01; > > > > > > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > > > > > > + if (lun->unpacked_lun >= 256) > > > > > > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > > > > > > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > > > > > > + } > > > > > > + > > > > > > + llist_add(&evt->list, &vs->vs_event_list); > > > > > > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > > > > > > + > > > > > > + return 0; > > > > > > +} > > > > > > + > > > > > > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > > > > > > { > > > > > > - pr_debug("%s: The handling func for event queue.\n", __func__); > > > > > > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > > > > > > + poll.work); > > > > > > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > > > > > > + > > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > > + return; > > > > > > > > > > Again just drop this check. > > > > > > > > Why? > > > > > > Same as above. > > > Because it's not safe to do this outside the vq mutex. > > > > Is it also not safe to the same in -net? > > It's safe there since we dereference the pointer. > > > Why is it not safe? > > Because you don't dereference the pointer. 
> > > > > > > + > > > > > > + if (tcm_vhost_check_events_dropped(vs)) > > > > > > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); > > > > > > + > > > > > > } > > > > > > > > > > > > static void vhost_scsi_handle_kick(struct vhost_work *work) > > > > > > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) > > > > > > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) > > > > > > vhost_scsi_flush_vq(vs, i); > > > > > > vhost_work_flush(&vs->dev, &vs->vs_completion_work); > > > > > > + vhost_work_flush(&vs->dev, &vs->vs_event_work); > > > > > > } > > > > > > > > > > > > /* > > > > > > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( > > > > > > return -EEXIST; > > > > > > } > > > > > > tv_tpg->tv_tpg_vhost_count++; > > > > > > + tv_tpg->vhost_scsi = vs; > > > > > > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; > > > > > > smp_mb__after_atomic_inc(); > > > > > > match = true; > > > > > > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( > > > > > > goto err_tpg; > > > > > > } > > > > > > tv_tpg->tv_tpg_vhost_count--; > > > > > > + tv_tpg->vhost_scsi = NULL; > > > > > > vs->vs_tpg[target] = NULL; > > > > > > match = true; > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) > > > > > > return -ENOMEM; > > > > > > > > > > > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); > > > > > > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); > > > > > > + > > > > > > + s->vs_events_nr = 0; > > > > > > + s->vs_events_dropped = false; > > > > > > + mutex_init(&s->vs_events_lock); > > > > > > > > > > > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; > > > > > > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; > > > > > > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) > > > > > > return "Unknown"; > > > > > > } > > > > > > > > > > > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, > > > > > > + struct se_lun *lun, bool plug) > > > > > > +{ > > > > > > + > > > > > > + struct vhost_scsi *vs = tpg->vhost_scsi; > > > > > > + u32 reason; > > > > > > + > > > > > > + mutex_lock(&tpg->tv_tpg_mutex); > > > > > > + vs = tpg->vhost_scsi; > > > > > > + mutex_unlock(&tpg->tv_tpg_mutex); > > > > > > + if (!vs) > > > > > > + return -EOPNOTSUPP; > > > > > > > > > > Why EOPNOTSUPP? When does this happen? > > > > > > > > When tpg->vhost_scsi has not been setup. E.g. no one starts a vhost-scsi > > > > guest. > > > > > > So ENODEV or something. > > > > okay. > > > > > > > > + > > > > > > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) > > > > > > + return -EOPNOTSUPP; > > > > > > + > > > > > > + if (plug) > > > > > > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; > > > > > > + else > > > > > > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; > > > > > > + > > > > > > > > > > You have dropped the tpg lock so tpg->vhost_scsi can become > > > > > NULL now. Why is this safe? 
> > > > > > > > > > > + return tcm_vhost_send_evt(vs, tpg, lun, > > > > > > + VIRTIO_SCSI_T_TRANSPORT_RESET, > > > > > > + reason); > > > > > > +} > > > > > > + > > > > > > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > > > +{ > > > > > > + return tcm_vhost_do_plug(tpg, lun, true); > > > > > > +} > > > > > > + > > > > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > > > +{ > > > > > > + return tcm_vhost_do_plug(tpg, lun, false); > > > > > > +} > > > > > > + > > > > > > > > > > What are these wrappers for? Seem useless ... > > > > > > > > > > > > It make people easier to understand what's the true and false is about > > > > in tcm_vhost_do_plug. > > > > > > So just pass in the reason and not a bool. > > > When readers see > > > VIRTIO_SCSI_EVT_RESET_RESCAN/VIRTIO_SCSI_EVT_RESET_REMOVED > > > they will know what is going on. > > > > How can you make the connection that VIRTIO_SCSI_EVT_RESET_RESCAN is > > hotplug and VIRTIO_SCSI_EVT_RESET_REMOVED is hot-unplug at first glance? > > > > > > > > static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > > > struct se_lun *lun) > > > > > > { > > > > > > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > > > tv_tpg->tv_tpg_port_count++; > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > > > > > > > + tcm_vhost_hotplug(tv_tpg, lun); > > > > > > + > > > > > > return 0; > > > > > > } > > > > > > > > > > > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, > > > > > > - struct se_lun *se_lun) > > > > > > + struct se_lun *lun) > > > > > > { > > > > > > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, > > > > > > struct tcm_vhost_tpg, se_tpg); > > > > > > - > > > > > > mutex_lock(&tv_tpg->tv_tpg_mutex); > > > > > > tv_tpg->tv_tpg_port_count--; > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > + > > > > > > + tcm_vhost_hotunplug(tv_tpg, lun); > > > > > > } > > > > > > > > > > > > static struct se_node_acl *tcm_vhost_make_nodeacl( > > > > > > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h > > > > > > index 1d2ae7a..94e9ee53 100644 > > > > > > --- a/drivers/vhost/tcm_vhost.h > > > > > > +++ b/drivers/vhost/tcm_vhost.h > > > > > > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { > > > > > > struct se_node_acl se_node_acl; > > > > > > }; > > > > > > > > > > > > +struct vhost_scsi; > > > > > > struct tcm_vhost_tpg { > > > > > > /* Vhost port target portal group tag for TCM */ > > > > > > u16 tport_tpgt; > > > > > > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { > > > > > > struct tcm_vhost_tport *tport; > > > > > > /* Returned by tcm_vhost_make_tpg() */ > > > > > > struct se_portal_group se_tpg; > > > > > > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ > > > > > > + struct vhost_scsi *vhost_scsi; > > > > > > }; > > > > > > > > > > > > struct tcm_vhost_tport { > > > > > > @@ -83,6 +86,13 @@ struct tcm_vhost_tport { > > > > > > struct se_wwn tport_wwn; > > > > > > }; > > > > > > > > > > > > +struct tcm_vhost_evt { > > > > > > + /* virtio_scsi event */ > > > > > > + struct virtio_scsi_event event; > > > > > > + /* virtio_scsi event list, serviced from vhost worker thread */ > > > > > > + struct llist_node list; > > > > > > +}; > > > > > > + > > > > > > /* > > > > > > * As per request from MST, keep TCM_VHOST related ioctl defines out of > > > > > > * linux/vhost.h (user-space) for now.. 
> > > > > > -- > > > > > > 1.8.1.4 > > > > > > > > -- > > > > Asias > > > > -- > > Asias -- Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
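The lun[] setup quoted in the exchange above follows the virtio-scsi LUN addressing format (the patch's own TODO notes it should eventually share code with virtio-scsi.ko): byte 0 is fixed to 1, byte 1 carries the target, bytes 2 and 3 carry the LUN with 0x40 set in the high byte once unpacked_lun goes past 255, and bytes 4 to 7 must stay zero, which is the point of the kzalloc versus explicit-initialization argument. Below is a minimal sketch of the fully explicit variant; the helper name and the trailing memset are illustrative only and not part of the posted patch.

static void tcm_vhost_fill_evt_lun(struct tcm_vhost_evt *evt,
				   struct tcm_vhost_tpg *tpg,
				   struct se_lun *lun)
{
	evt->event.lun[0] = 0x01;			/* fixed first byte of the virtio-scsi LUN format */
	evt->event.lun[1] = tpg->tport_tpgt & 0xFF;	/* target */
	evt->event.lun[2] = 0;
	if (lun->unpacked_lun >= 256)			/* flat addressing for LUNs above 255 */
		evt->event.lun[2] = (lun->unpacked_lun >> 8) | 0x40;
	evt->event.lun[3] = lun->unpacked_lun & 0xFF;
	memset(&evt->event.lun[4], 0, sizeof(evt->event.lun) - 4);	/* the 4 bytes kzalloc() otherwise covers */
}

Whether those trailing bytes are cleared explicitly or implicitly by kzalloc() is exactly the style point being traded back and forth; the resulting encoding is the same either way.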
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-22 9:20 ` Asias He @ 2013-04-22 13:17 ` Michael S. Tsirkin 2013-04-22 15:22 ` Asias He 0 siblings, 1 reply; 37+ messages in thread From: Michael S. Tsirkin @ 2013-04-22 13:17 UTC (permalink / raw) To: Asias He Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Mon, Apr 22, 2013 at 05:20:24PM +0800, Asias He wrote: > On Sat, Apr 20, 2013 at 10:01:31PM +0300, Michael S. Tsirkin wrote: > > On Fri, Apr 19, 2013 at 10:34:10AM +0800, Asias He wrote: > > > On Thu, Apr 18, 2013 at 11:21:54AM +0300, Michael S. Tsirkin wrote: > > > > On Thu, Apr 18, 2013 at 04:59:08PM +0800, Asias He wrote: > > > > > On Thu, Apr 18, 2013 at 10:34:29AM +0300, Michael S. Tsirkin wrote: > > > > > > On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > > > > > > > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > > > > > > > virtio-scsi), hotplug support is added to virtio-scsi. > > > > > > > > > > > > > > This patch adds hotplug and hotunplug support to tcm_vhost. > > > > > > > > > > > > > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > > > > > > > LUN in guest. > > > > > > > > > > > > > > Changes in v7: > > > > > > > - Add vhost_work_flush for vs->vs_event_work to this series > > > > > > > > > > > > > > Changes in v6: > > > > > > > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > > > > > > > > > > > > > Changes in v5: > > > > > > > - Switch to int from u64 to vs_events_nr > > > > > > > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > > > > > > > - Do not nest dev mutex within vq mutex > > > > > > > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > > > > > > > - Rebase to target/master > > > > > > > > > > > > > > Changes in v4: > > > > > > > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > > > > > > > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > > > > > > > > > > > > > Changes in v3: > > > > > > > - Separate the bug fix to another thread > > > > > > > > > > > > > > Changes in v2: > > > > > > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > > > > > > > - Fix racing of vs_events_nr > > > > > > > - Add flush fix patch to this series > > > > > > > > > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > > > > > > --- > > > > > > > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > > > > > > > drivers/vhost/tcm_vhost.h | 10 +++ > > > > > > > 2 files changed, 214 insertions(+), 4 deletions(-) > > > > > > > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > > > > index 8f05528..da2021b 100644 > > > > > > > --- a/drivers/vhost/tcm_vhost.c > > > > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > > > > @@ -66,11 +66,13 @@ enum { > > > > > > > * TODO: debug and remove the workaround. 
> > > > > > > */ > > > > > > > enum { > > > > > > > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > > > > > > > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > > > > > > > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > > > > > > > }; > > > > > > > > > > > > > > #define VHOST_SCSI_MAX_TARGET 256 > > > > > > > #define VHOST_SCSI_MAX_VQ 128 > > > > > > > +#define VHOST_SCSI_MAX_EVENT 128 > > > > > > > > > > > > > > struct vhost_scsi { > > > > > > > /* Protected by vhost_scsi->dev.mutex */ > > > > > > > @@ -82,6 +84,13 @@ struct vhost_scsi { > > > > > > > > > > > > > > struct vhost_work vs_completion_work; /* cmd completion work item */ > > > > > > > struct llist_head vs_completion_list; /* cmd completion queue */ > > > > > > > + > > > > > > > + struct vhost_work vs_event_work; /* evt injection work item */ > > > > > > > + struct llist_head vs_event_list; /* evt injection queue */ > > > > > > > + > > > > > > > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ > > > > > > > > > > > > Looking at this code, there are just so many locks now. > > > > > > This does not make me feel safe :) > > > > > > At least, please document lock nesting. > > > > > > > > > > > > > > > > Do you see a real problem? > > > > > > > > Complexity is a real problem. My head already spins. No I don't see a > > > > bug, but we need to simplify locking. > > > > > > > > And I think I see a nice way to do this: > > > > 1. move away from a global work to per-event work - so no list > > > > 2. remove dynamic allocation of events - so no events_nr > > > > 3. set/test overrun flag under the appropriate vq mutex > > > > > > > > I think that's ideal. But we can move there in small steps. As a first > > > > step - why can't we always take the vq mutex lock and drop > > > > vs_events_lock? > > > > > > There are really different ways to solve the same problem. You are > > > welcome to implement you ideas. > > > > I have a problem with this patch. It just seems too tricky, > > while it is really trying to do a very simple thing. > > How tricky is it? It has its own locking scheme which seems unwarranted. Find a way to use the existing locking please. > > There's new lock nesting. > > I used dev mutex in the past, but it does not work because it introduces > a deadlock in vhost_scsi_set_endpoint. vs_events is a per device stat, > so I think it deserves a per device lock instead of a per queue lock. We don't add a lock per field, it just does not make sense. > > There are new places which do lock, read > > data, unlock, use data. > > With vs_events_lock? With this patch. > > There's new RCU. > > It is not a *new* RCU. It uses the existing one vq->private_data. Incorrectly. It should be used on the workqueue where it's dereferenced. > > All this feels fragile, and > > will be hard to maintain even though I don't see bugs. > > > So I would prefer it if you tried to simplify > > this code making it easier to maintain. I sent some ideas on how to > > do this, but if you like, do it some other way. > > I think it is pretty simple already. Here, we disagree. And since I'm one of the people who gets to maintain this, it's important that I'm comfortable with the code, and don't keep worrying "is this racy? is there a deadlock?". This patch makes me uncomfortable. Could you address this please? > > I also made some comments about the coding style but these > > are less important (except goto again I guess) if these > > will be te only issues, I'd say we can apply. 
> > > > > > > > > > > + bool vs_events_dropped; /* any missed events */ > > > > > > > + int vs_events_nr; /* num of pending events */ > > > > > > > }; > > > > > > > > > > > > > > /* Local pointer to allocated TCM configfs fabric module */ > > > > > > > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > > > > return ret; > > > > > > > } > > > > > > > > > > > > > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > > > > > > > +{ > > > > > > > + bool ret; > > > > > > > + > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > + ret = vs->vs_events_dropped; > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > + > > > > > > > + return ret; > > > > > > > +} > > > > > > > + > > > > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > > > > { > > > > > > > return 1; > > > > > > > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > > > > > > > return 0; > > > > > > > } > > > > > > > > > > > > > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > > > > > > > +{ > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > + vs->vs_events_nr--; > > > > > > > + kfree(evt); > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > +} > > > > > > > + > > > > > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > > > > > > > + u32 event, u32 reason) > > > > > > > +{ > > > > > > > + struct tcm_vhost_evt *evt; > > > > > > > + > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > > > > > > > + vs->vs_events_dropped = true; > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > + return NULL; > > > > > > > + } > > > > > > > + > > > > > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); > > > > > > > > > > > > I think kzalloc not needed here, you init all fields. > > > > > > > > > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0. > > > > > > > > So that is 4 bytes just init them when you set rest of lun. > > > > > > It is not in the fast path. You can do it this way but not a must. > > > > I think that's a bit cleaner than relying on kzalloc to zero-initialize. > > I think it is a bit shorter, 4 lines saved. OTOH you don't have to think "what about the rest of the bytes?" then hunt through the code and go "ah, it's kzalloc". Looks it's a style issue but I prefer explicit initialization. > > > > > > Also, this basically means if backend does plug/unplug very quickly, > > > > > > we start consuming kernel memory without a limit. Not good. > > > > > > How about we allocate the event as part of target? > > > > > > There shouldn't ever be more than one hotplug > > > > > > event in flight per target, right? > > > > > > > > > > It is limited by VHOST_SCSI_MAX_EVENT. > > > > > > > > OK I missed that. But where does VHOST_SCSI_MAX_EVENT come from by the > > > > way? I don't see it in spec or did I miss it? It seems it's best not > > > > to lost events as long as there are descriptors in the event vq. > > > > > > You wanted this. > > > > Yes I wanted some limit on the amount of memory we use up :) > > So we limited this already. Yes. But my question is this: what happens if an event is lost? Can guest recover and how? > > > > > > > > > > > > > + if (evt) { > > > > > > > > > > > > Let's do clean error handling here and above: > > > > > > if (!evt) > > > > > > goto err; > > > > > > > > > > We can you do in err? 
You simply unlock and return. Why bother? How > > > > > cleaner it will be. > > > > > > > > There's another error above and we'll share code. > > > > It's not a big issue, just a bit nicer IMHO. > > > > > > Can't we focus on real issues other than this? > > > > Yes style issues are not the most important here. > > > > > > > > > + evt->event.event = event; > > > > > > > + evt->event.reason = reason; > > > > > > > + vs->vs_events_nr++; > > > > > > > + } > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > + > > > > > > > + return evt; > > > > > > > +} > > > > > > > + > > > > > > > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > > > { > > > > > > > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > > > > > > > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > > > kfree(tv_cmd); > > > > > > > } > > > > > > > > > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > > > > > > > + struct tcm_vhost_evt *evt) > > > > > > > +{ > > > > > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > > > > > > > + struct virtio_scsi_event *event = &evt->event; > > > > > > > + struct virtio_scsi_event __user *eventp; > > > > > > > + unsigned out, in; > > > > > > > + int head, ret; > > > > > > > + > > > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > > > + return; > > > > > > > + > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > + mutex_lock(&vq->mutex); > > > > > > > +again: > > > > > > > + vhost_disable_notify(&vs->dev, vq); > > > > > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > > > > > > > + ARRAY_SIZE(vq->iov), &out, &in, > > > > > > > + NULL, NULL); > > > > > > > + if (head < 0) { > > > > > > > + vs->vs_events_dropped = true; > > > > > > > + goto out; > > > > > > > + } > > > > > > > + if (head == vq->num) { > > > > > > > + if (vhost_enable_notify(&vs->dev, vq)) > > > > > > > + goto again; > > > > > > > > > > > > Please write loops with while() or for(). > > > > > > Not with goto. goto is for error handling. > > > > > > > > > > This makes extra indention which is more ugly. > > > > > > > > I don't care. No loops with goto and that's a hard rule. > > > > > > It is not a loop. > > > > If same code repeats, it's a loop. > > We really need here is to call vhost_get_vq_desc once. It's not a loop > like other places to use while() or for() with vhost_get_vq_desc. Exactly. Like a mutex for example? We only need to lock it once. See __mutex_lock_common - uses a for loop, doesn't it? So fix this please, use while or for or even until. 
> > > > > > > + vs->vs_events_dropped = true; > > > > > > > + goto out; > > > > > > > + } > > > > > > > + > > > > > > > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > > > > > > > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > > > > > > > + vq->iov[out].iov_len); > > > > > > > + vs->vs_events_dropped = true; > > > > > > > + goto out; > > > > > > > + } > > > > > > > + > > > > > > > + if (vs->vs_events_dropped) { > > > > > > > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > > > > > > > + vs->vs_events_dropped = false; > > > > > > > + } > > > > > > > + > > > > > > > + eventp = vq->iov[out].iov_base; > > > > > > > + ret = __copy_to_user(eventp, event, sizeof(*event)); > > > > > > > + if (!ret) > > > > > > > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > > > > > > > + else > > > > > > > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > > > > > > > +out: > > > > > > > + mutex_unlock(&vq->mutex); > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > +} > > > > > > > + > > > > > > > +static void tcm_vhost_evt_work(struct vhost_work *work) > > > > > > > +{ > > > > > > > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > > > > > > > + vs_event_work); > > > > > > > + struct tcm_vhost_evt *evt; > > > > > > > + struct llist_node *llnode; > > > > > > > + > > > > > > > + llnode = llist_del_all(&vs->vs_event_list); > > > > > > > > > > > > The assumption is that this is slow path thing, no need to worry about > > > > > > speed, yes? so let's do simple list_add, list_del etc. > > > > > > > > > > Why it is simpler? > > > > > > > > simple list_ with a lock is easier to use correctly. > > > > > > You need to use it correctly anyway for cmd. Why you want a lock here? > > > > > > > > We are using llist for cmd. Why it is better using > > > > > one for evt and the different for cmd? Similar code makes people easier > > > > > to read. > > > > > > > > OK fair enough. But my idea is above to use a work > > > > structure instead of a list of events. > > > > This way we don't need extra list at all, > > > > no new locks, nothing. > > > > > > Again, You are welcome to implement your ideas. > > > > I might but I'm otherwise occupied at the moment. 
> > > > > > > > > + while (llnode) { > > > > > > > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > > > > > > > + llnode = llist_next(llnode); > > > > > > > + tcm_vhost_do_evt_work(vs, evt); > > > > > > > + tcm_vhost_free_evt(vs, evt); > > > > > > > + } > > > > > > > +} > > > > > > > + > > > > > > > /* Fill in status and signal that we are done processing this command > > > > > > > * > > > > > > > * This is scheduled in the vhost work queue so we are called with the owner > > > > > > > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > > > > > > > pr_debug("%s: The handling func for control queue.\n", __func__); > > > > > > > } > > > > > > > > > > > > > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > > > > > > > + struct se_lun *lun, u32 event, u32 reason) > > > > > > > +{ > > > > > > > + struct tcm_vhost_evt *evt; > > > > > > > + > > > > > > > + evt = tcm_vhost_allocate_evt(vs, event, reason); > > > > > > > + if (!evt) > > > > > > > + return -ENOMEM; > > > > > > > + > > > > > > > + if (tpg && lun) { > > > > > > > + /* TODO: share lun setup code with virtio-scsi.ko */ > > > > > > > + evt->event.lun[0] = 0x01; > > > > > > > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > > > > > > > + if (lun->unpacked_lun >= 256) > > > > > > > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > > > > > > > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > > > > > > > + } > > > > > > > + > > > > > > > + llist_add(&evt->list, &vs->vs_event_list); > > > > > > > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > > > > > > > + > > > > > > > + return 0; > > > > > > > +} > > > > > > > + > > > > > > > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > > > > > > > { > > > > > > > - pr_debug("%s: The handling func for event queue.\n", __func__); > > > > > > > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > > > > > > > + poll.work); > > > > > > > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > > > > > > > + > > > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > > > + return; > > > > > > > > > > > > Again just drop this check. > > > > > > > > > > Why? > > > > > > > > Same as above. > > > > Because it's not safe to do this outside the vq mutex. > > > > > > Is it also not safe to the same in -net? > > > > It's safe there since we dereference the pointer. > > > > > Why is it not safe? > > > > Because you don't dereference the pointer. 
> > > > > > > > > + > > > > > > > + if (tcm_vhost_check_events_dropped(vs)) > > > > > > > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0); > > > > > > > + > > > > > > > } > > > > > > > > > > > > > > static void vhost_scsi_handle_kick(struct vhost_work *work) > > > > > > > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) > > > > > > > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) > > > > > > > vhost_scsi_flush_vq(vs, i); > > > > > > > vhost_work_flush(&vs->dev, &vs->vs_completion_work); > > > > > > > + vhost_work_flush(&vs->dev, &vs->vs_event_work); > > > > > > > } > > > > > > > > > > > > > > /* > > > > > > > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint( > > > > > > > return -EEXIST; > > > > > > > } > > > > > > > tv_tpg->tv_tpg_vhost_count++; > > > > > > > + tv_tpg->vhost_scsi = vs; > > > > > > > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg; > > > > > > > smp_mb__after_atomic_inc(); > > > > > > > match = true; > > > > > > > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint( > > > > > > > goto err_tpg; > > > > > > > } > > > > > > > tv_tpg->tv_tpg_vhost_count--; > > > > > > > + tv_tpg->vhost_scsi = NULL; > > > > > > > vs->vs_tpg[target] = NULL; > > > > > > > match = true; > > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) > > > > > > > return -ENOMEM; > > > > > > > > > > > > > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work); > > > > > > > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work); > > > > > > > + > > > > > > > + s->vs_events_nr = 0; > > > > > > > + s->vs_events_dropped = false; > > > > > > > + mutex_init(&s->vs_events_lock); > > > > > > > > > > > > > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick; > > > > > > > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick; > > > > > > > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport) > > > > > > > return "Unknown"; > > > > > > > } > > > > > > > > > > > > > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg, > > > > > > > + struct se_lun *lun, bool plug) > > > > > > > +{ > > > > > > > + > > > > > > > + struct vhost_scsi *vs = tpg->vhost_scsi; > > > > > > > + u32 reason; > > > > > > > + > > > > > > > + mutex_lock(&tpg->tv_tpg_mutex); > > > > > > > + vs = tpg->vhost_scsi; > > > > > > > + mutex_unlock(&tpg->tv_tpg_mutex); > > > > > > > + if (!vs) > > > > > > > + return -EOPNOTSUPP; > > > > > > > > > > > > Why EOPNOTSUPP? When does this happen? > > > > > > > > > > When tpg->vhost_scsi has not been setup. E.g. no one starts a vhost-scsi > > > > > guest. > > > > > > > > So ENODEV or something. > > > > > > okay. > > > > > > > > > > + > > > > > > > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG)) > > > > > > > + return -EOPNOTSUPP; > > > > > > > + > > > > > > > + if (plug) > > > > > > > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN; > > > > > > > + else > > > > > > > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED; > > > > > > > + > > > > > > > > > > > > You have dropped the tpg lock so tpg->vhost_scsi can become > > > > > > NULL now. Why is this safe? 
> > > > > > > > > > > > > + return tcm_vhost_send_evt(vs, tpg, lun, > > > > > > > + VIRTIO_SCSI_T_TRANSPORT_RESET, > > > > > > > + reason); > > > > > > > +} > > > > > > > + > > > > > > > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > > > > +{ > > > > > > > + return tcm_vhost_do_plug(tpg, lun, true); > > > > > > > +} > > > > > > > + > > > > > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun) > > > > > > > +{ > > > > > > > + return tcm_vhost_do_plug(tpg, lun, false); > > > > > > > +} > > > > > > > + > > > > > > > > > > > > What are these wrappers for? Seem useless ... > > > > > > > > > > > > > > > It make people easier to understand what's the true and false is about > > > > > in tcm_vhost_do_plug. > > > > > > > > So just pass in the reason and not a bool. > > > > When readers see > > > > VIRTIO_SCSI_EVT_RESET_RESCAN/VIRTIO_SCSI_EVT_RESET_REMOVED > > > > they will know what is going on. > > > > > > How can you make the connection that VIRTIO_SCSI_EVT_RESET_RESCAN is > > > hotplug and VIRTIO_SCSI_EVT_RESET_REMOVED is hot-unplug at first glance? > > > > > > > > > > static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > > > > struct se_lun *lun) > > > > > > > { > > > > > > > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg, > > > > > > > tv_tpg->tv_tpg_port_count++; > > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > > > > > > > > > + tcm_vhost_hotplug(tv_tpg, lun); > > > > > > > + > > > > > > > return 0; > > > > > > > } > > > > > > > > > > > > > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg, > > > > > > > - struct se_lun *se_lun) > > > > > > > + struct se_lun *lun) > > > > > > > { > > > > > > > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg, > > > > > > > struct tcm_vhost_tpg, se_tpg); > > > > > > > - > > > > > > > mutex_lock(&tv_tpg->tv_tpg_mutex); > > > > > > > tv_tpg->tv_tpg_port_count--; > > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex); > > > > > > > + > > > > > > > + tcm_vhost_hotunplug(tv_tpg, lun); > > > > > > > } > > > > > > > > > > > > > > static struct se_node_acl *tcm_vhost_make_nodeacl( > > > > > > > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h > > > > > > > index 1d2ae7a..94e9ee53 100644 > > > > > > > --- a/drivers/vhost/tcm_vhost.h > > > > > > > +++ b/drivers/vhost/tcm_vhost.h > > > > > > > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl { > > > > > > > struct se_node_acl se_node_acl; > > > > > > > }; > > > > > > > > > > > > > > +struct vhost_scsi; > > > > > > > struct tcm_vhost_tpg { > > > > > > > /* Vhost port target portal group tag for TCM */ > > > > > > > u16 tport_tpgt; > > > > > > > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg { > > > > > > > struct tcm_vhost_tport *tport; > > > > > > > /* Returned by tcm_vhost_make_tpg() */ > > > > > > > struct se_portal_group se_tpg; > > > > > > > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */ > > > > > > > + struct vhost_scsi *vhost_scsi; > > > > > > > }; > > > > > > > > > > > > > > struct tcm_vhost_tport { > > > > > > > @@ -83,6 +86,13 @@ struct tcm_vhost_tport { > > > > > > > struct se_wwn tport_wwn; > > > > > > > }; > > > > > > > > > > > > > > +struct tcm_vhost_evt { > > > > > > > + /* virtio_scsi event */ > > > > > > > + struct virtio_scsi_event event; > > > > > > > + /* virtio_scsi event list, serviced from vhost worker thread */ > > > > > > > + struct llist_node list; > > > > > > > +}; > > > > > > > + > > > > > > > /* > > > > > > > * As per 
request from MST, keep TCM_VHOST related ioctl defines out of > > > > > > > * linux/vhost.h (user-space) for now.. > > > > > > > -- > > > > > > > 1.8.1.4 > > > > > > > > > > -- > > > > > Asias > > > > > > -- > > > Asias > > -- > Asias ^ permalink raw reply [flat|nested] 37+ messages in thread
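The loop form being requested in place of the again: label is a small mechanical change. Below is a rough, untested sketch of the same descriptor fetch from tcm_vhost_do_evt_work(), keeping the variable names of the quoted hunk and the same behaviour, just restructured.

	for (;;) {
		vhost_disable_notify(&vs->dev, vq);
		head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
					ARRAY_SIZE(vq->iov), &out, &in,
					NULL, NULL);
		if (head < 0) {
			vs->vs_events_dropped = true;
			goto out;
		}
		if (head != vq->num)
			break;			/* got a usable descriptor */
		if (vhost_enable_notify(&vs->dev, vq))
			continue;		/* guest added buffers meanwhile, retry */
		vs->vs_events_dropped = true;	/* no buffers in the event ring */
		goto out;
	}

The goto out exits stay, since that is the error-handling use of goto both sides accept; only the retry path becomes an explicit loop.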
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support 2013-04-22 13:17 ` Michael S. Tsirkin @ 2013-04-22 15:22 ` Asias He 2013-04-23 4:18 ` Rusty Russell 0 siblings, 1 reply; 37+ messages in thread From: Asias He @ 2013-04-22 15:22 UTC (permalink / raw) To: Michael S. Tsirkin Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini On Mon, Apr 22, 2013 at 04:17:04PM +0300, Michael S. Tsirkin wrote: > On Mon, Apr 22, 2013 at 05:20:24PM +0800, Asias He wrote: > > On Sat, Apr 20, 2013 at 10:01:31PM +0300, Michael S. Tsirkin wrote: > > > On Fri, Apr 19, 2013 at 10:34:10AM +0800, Asias He wrote: > > > > On Thu, Apr 18, 2013 at 11:21:54AM +0300, Michael S. Tsirkin wrote: > > > > > On Thu, Apr 18, 2013 at 04:59:08PM +0800, Asias He wrote: > > > > > > On Thu, Apr 18, 2013 at 10:34:29AM +0300, Michael S. Tsirkin wrote: > > > > > > > On Thu, Apr 18, 2013 at 09:05:54AM +0800, Asias He wrote: > > > > > > > > In commit 365a7150094 ([SCSI] virtio-scsi: hotplug support for > > > > > > > > virtio-scsi), hotplug support is added to virtio-scsi. > > > > > > > > > > > > > > > > This patch adds hotplug and hotunplug support to tcm_vhost. > > > > > > > > > > > > > > > > You can create or delete a LUN in targetcli to hotplug or hotunplug a > > > > > > > > LUN in guest. > > > > > > > > > > > > > > > > Changes in v7: > > > > > > > > - Add vhost_work_flush for vs->vs_event_work to this series > > > > > > > > > > > > > > > > Changes in v6: > > > > > > > > - Pass tcm_vhost_evt to tcm_vhost_do_evt_work > > > > > > > > > > > > > > > > Changes in v5: > > > > > > > > - Switch to int from u64 to vs_events_nr > > > > > > > > - Set s->vs_events_dropped flag in tcm_vhost_allocate_evt > > > > > > > > - Do not nest dev mutex within vq mutex > > > > > > > > - Use vs_events_lock to protect vs_events_dropped and vs_events_nr > > > > > > > > - Rebase to target/master > > > > > > > > > > > > > > > > Changes in v4: > > > > > > > > - Drop tcm_vhost_check_endpoint in tcm_vhost_send_evt > > > > > > > > - Add tcm_vhost_check_endpoint in vhost_scsi_evt_handle_kick > > > > > > > > > > > > > > > > Changes in v3: > > > > > > > > - Separate the bug fix to another thread > > > > > > > > > > > > > > > > Changes in v2: > > > > > > > > - Remove code duplication in tcm_vhost_{hotplug,hotunplug} > > > > > > > > - Fix racing of vs_events_nr > > > > > > > > - Add flush fix patch to this series > > > > > > > > > > > > > > > > Signed-off-by: Asias He <asias@redhat.com> > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > > > > > > > --- > > > > > > > > drivers/vhost/tcm_vhost.c | 208 +++++++++++++++++++++++++++++++++++++++++++++- > > > > > > > > drivers/vhost/tcm_vhost.h | 10 +++ > > > > > > > > 2 files changed, 214 insertions(+), 4 deletions(-) > > > > > > > > > > > > > > > > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c > > > > > > > > index 8f05528..da2021b 100644 > > > > > > > > --- a/drivers/vhost/tcm_vhost.c > > > > > > > > +++ b/drivers/vhost/tcm_vhost.c > > > > > > > > @@ -66,11 +66,13 @@ enum { > > > > > > > > * TODO: debug and remove the workaround. 
> > > > > > > > */ > > > > > > > > enum { > > > > > > > > - VHOST_SCSI_FEATURES = VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX) > > > > > > > > + VHOST_SCSI_FEATURES = (VHOST_FEATURES & (~VIRTIO_RING_F_EVENT_IDX)) | > > > > > > > > + (1ULL << VIRTIO_SCSI_F_HOTPLUG) > > > > > > > > }; > > > > > > > > > > > > > > > > #define VHOST_SCSI_MAX_TARGET 256 > > > > > > > > #define VHOST_SCSI_MAX_VQ 128 > > > > > > > > +#define VHOST_SCSI_MAX_EVENT 128 > > > > > > > > > > > > > > > > struct vhost_scsi { > > > > > > > > /* Protected by vhost_scsi->dev.mutex */ > > > > > > > > @@ -82,6 +84,13 @@ struct vhost_scsi { > > > > > > > > > > > > > > > > struct vhost_work vs_completion_work; /* cmd completion work item */ > > > > > > > > struct llist_head vs_completion_list; /* cmd completion queue */ > > > > > > > > + > > > > > > > > + struct vhost_work vs_event_work; /* evt injection work item */ > > > > > > > > + struct llist_head vs_event_list; /* evt injection queue */ > > > > > > > > + > > > > > > > > + struct mutex vs_events_lock; /* protect vs_events_dropped,events_nr */ > > > > > > > > > > > > > > Looking at this code, there are just so many locks now. > > > > > > > This does not make me feel safe :) > > > > > > > At least, please document lock nesting. > > > > > > > > > > > > > > > > > > > Do you see a real problem? > > > > > > > > > > Complexity is a real problem. My head already spins. No I don't see a > > > > > bug, but we need to simplify locking. > > > > > > > > > > And I think I see a nice way to do this: > > > > > 1. move away from a global work to per-event work - so no list > > > > > 2. remove dynamic allocation of events - so no events_nr > > > > > 3. set/test overrun flag under the appropriate vq mutex > > > > > > > > > > I think that's ideal. But we can move there in small steps. As a first > > > > > step - why can't we always take the vq mutex lock and drop > > > > > vs_events_lock? > > > > > > > > There are really different ways to solve the same problem. You are > > > > welcome to implement you ideas. > > > > > > I have a problem with this patch. It just seems too tricky, > > > while it is really trying to do a very simple thing. > > > > How tricky is it? > > It has its own locking scheme which seems unwarranted. > Find a way to use the existing locking please. Why is this tricky? > > > There's new lock nesting. > > > > I used dev mutex in the past, but it does not work because it introduces > > a deadlock in vhost_scsi_set_endpoint. vs_events is a per device stat, > > so I think it deserves a per device lock instead of a per queue lock. > > We don't add a lock per field, it just does not make sense. > > > > There are new places which do lock, read > > > data, unlock, use data. > > > > With vs_events_lock? > > With this patch. Please be explicit. > > > There's new RCU. > > > > It is not a *new* RCU. It uses the existing one vq->private_data. > > Incorrectly. It should be used on the workqueue where it's > dereferenced. > > > > All this feels fragile, and > > > will be hard to maintain even though I don't see bugs. > > > > > So I would prefer it if you tried to simplify > > > this code making it easier to maintain. I sent some ideas on how to > > > do this, but if you like, do it some other way. > > > > I think it is pretty simple already. > > Here, we disagree. Where does 'we' come from? > And since I'm one of the people who gets to maintain > this, it's important that I'm comfortable with the code, and > don't keep worrying "is this racy? is there a deadlock?". 
> This patch makes me uncomfortable. Could you address this please? No, I can not until I really see the real problem. You make me uncomfortable as well. > > > I also made some comments about the coding style but these > > > are less important (except goto again I guess) if these > > > will be te only issues, I'd say we can apply. > > > > > > > > > > > > > > + bool vs_events_dropped; /* any missed events */ > > > > > > > > + int vs_events_nr; /* num of pending events */ > > > > > > > > }; > > > > > > > > > > > > > > > > /* Local pointer to allocated TCM configfs fabric module */ > > > > > > > > @@ -129,6 +138,17 @@ static bool tcm_vhost_check_endpoint(struct vhost_virtqueue *vq) > > > > > > > > return ret; > > > > > > > > } > > > > > > > > > > > > > > > > +static bool tcm_vhost_check_events_dropped(struct vhost_scsi *vs) > > > > > > > > +{ > > > > > > > > + bool ret; > > > > > > > > + > > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > > + ret = vs->vs_events_dropped; > > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > > + > > > > > > > > + return ret; > > > > > > > > +} > > > > > > > > + > > > > > > > > static int tcm_vhost_check_true(struct se_portal_group *se_tpg) > > > > > > > > { > > > > > > > > return 1; > > > > > > > > @@ -379,6 +399,37 @@ static int tcm_vhost_queue_tm_rsp(struct se_cmd *se_cmd) > > > > > > > > return 0; > > > > > > > > } > > > > > > > > > > > > > > > > +static void tcm_vhost_free_evt(struct vhost_scsi *vs, struct tcm_vhost_evt *evt) > > > > > > > > +{ > > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > > + vs->vs_events_nr--; > > > > > > > > + kfree(evt); > > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > > +} > > > > > > > > + > > > > > > > > +static struct tcm_vhost_evt *tcm_vhost_allocate_evt(struct vhost_scsi *vs, > > > > > > > > + u32 event, u32 reason) > > > > > > > > +{ > > > > > > > > + struct tcm_vhost_evt *evt; > > > > > > > > + > > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > > + if (vs->vs_events_nr > VHOST_SCSI_MAX_EVENT) { > > > > > > > > + vs->vs_events_dropped = true; > > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > > + return NULL; > > > > > > > > + } > > > > > > > > + > > > > > > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL); > > > > > > > > > > > > > > I think kzalloc not needed here, you init all fields. > > > > > > > > > > > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0. > > > > > > > > > > So that is 4 bytes just init them when you set rest of lun. > > > > > > > > It is not in the fast path. You can do it this way but not a must. > > > > > > I think that's a bit cleaner than relying on kzalloc to zero-initialize. > > > > I think it is a bit shorter, 4 lines saved. > > OTOH you don't have to think "what about the rest of the bytes?" then > hunt through the code and go "ah, it's kzalloc". > Looks it's a style issue but I prefer explicit initialization. It is just make none sense to argue about this. If you really want you can make patches to remove all the lines where use kzalloc because it is cleaner. > > > > > > > Also, this basically means if backend does plug/unplug very quickly, > > > > > > > we start consuming kernel memory without a limit. Not good. > > > > > > > How about we allocate the event as part of target? > > > > > > > There shouldn't ever be more than one hotplug > > > > > > > event in flight per target, right? > > > > > > > > > > > > It is limited by VHOST_SCSI_MAX_EVENT. > > > > > > > > > > OK I missed that. 
But where does VHOST_SCSI_MAX_EVENT come from by the > > > > > way? I don't see it in spec or did I miss it? It seems it's best not > > > > > to lost events as long as there are descriptors in the event vq. > > > > > > > > You wanted this. > > > > > > Yes I wanted some limit on the amount of memory we use up :) > > > > So we limited this already. > > Yes. But my question is this: what happens if an event is lost? > Can guest recover and how? VIRTIO_SCSI_T_NO_EVENT. > > > > > > > > > > > > > > > + if (evt) { > > > > > > > > > > > > > > Let's do clean error handling here and above: > > > > > > > if (!evt) > > > > > > > goto err; > > > > > > > > > > > > We can you do in err? You simply unlock and return. Why bother? How > > > > > > cleaner it will be. > > > > > > > > > > There's another error above and we'll share code. > > > > > It's not a big issue, just a bit nicer IMHO. > > > > > > > > Can't we focus on real issues other than this? > > > > > > Yes style issues are not the most important here. > > > > > > > > > > > + evt->event.event = event; > > > > > > > > + evt->event.reason = reason; > > > > > > > > + vs->vs_events_nr++; > > > > > > > > + } > > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > > + > > > > > > > > + return evt; > > > > > > > > +} > > > > > > > > + > > > > > > > > static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > > > > { > > > > > > > > struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd; > > > > > > > > @@ -397,6 +448,75 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd *tv_cmd) > > > > > > > > kfree(tv_cmd); > > > > > > > > } > > > > > > > > > > > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs, > > > > > > > > + struct tcm_vhost_evt *evt) > > > > > > > > +{ > > > > > > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT]; > > > > > > > > + struct virtio_scsi_event *event = &evt->event; > > > > > > > > + struct virtio_scsi_event __user *eventp; > > > > > > > > + unsigned out, in; > > > > > > > > + int head, ret; > > > > > > > > + > > > > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > > > > + return; > > > > > > > > + > > > > > > > > + mutex_lock(&vs->vs_events_lock); > > > > > > > > + mutex_lock(&vq->mutex); > > > > > > > > +again: > > > > > > > > + vhost_disable_notify(&vs->dev, vq); > > > > > > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov, > > > > > > > > + ARRAY_SIZE(vq->iov), &out, &in, > > > > > > > > + NULL, NULL); > > > > > > > > + if (head < 0) { > > > > > > > > + vs->vs_events_dropped = true; > > > > > > > > + goto out; > > > > > > > > + } > > > > > > > > + if (head == vq->num) { > > > > > > > > + if (vhost_enable_notify(&vs->dev, vq)) > > > > > > > > + goto again; > > > > > > > > > > > > > > Please write loops with while() or for(). > > > > > > > Not with goto. goto is for error handling. > > > > > > > > > > > > This makes extra indention which is more ugly. > > > > > > > > > > I don't care. No loops with goto and that's a hard rule. > > > > > > > > It is not a loop. > > > > > > If same code repeats, it's a loop. > > > > We really need here is to call vhost_get_vq_desc once. It's not a loop > > like other places to use while() or for() with vhost_get_vq_desc. > > Exactly. Like a mutex for example? We only need to > lock it once. See __mutex_lock_common - uses a for loop, > doesn't it? So what? > So fix this please, use while or for or even until. I do not think it is necessary. 
> > > > > > > > + vs->vs_events_dropped = true; > > > > > > > > + goto out; > > > > > > > > + } > > > > > > > > + > > > > > > > > + if ((vq->iov[out].iov_len != sizeof(struct virtio_scsi_event))) { > > > > > > > > + vq_err(vq, "Expecting virtio_scsi_event, got %zu bytes\n", > > > > > > > > + vq->iov[out].iov_len); > > > > > > > > + vs->vs_events_dropped = true; > > > > > > > > + goto out; > > > > > > > > + } > > > > > > > > + > > > > > > > > + if (vs->vs_events_dropped) { > > > > > > > > + event->event |= VIRTIO_SCSI_T_EVENTS_MISSED; > > > > > > > > + vs->vs_events_dropped = false; > > > > > > > > + } > > > > > > > > + > > > > > > > > + eventp = vq->iov[out].iov_base; > > > > > > > > + ret = __copy_to_user(eventp, event, sizeof(*event)); > > > > > > > > + if (!ret) > > > > > > > > + vhost_add_used_and_signal(&vs->dev, vq, head, 0); > > > > > > > > + else > > > > > > > > + vq_err(vq, "Faulted on tcm_vhost_send_event\n"); > > > > > > > > +out: > > > > > > > > + mutex_unlock(&vq->mutex); > > > > > > > > + mutex_unlock(&vs->vs_events_lock); > > > > > > > > +} > > > > > > > > + > > > > > > > > +static void tcm_vhost_evt_work(struct vhost_work *work) > > > > > > > > +{ > > > > > > > > + struct vhost_scsi *vs = container_of(work, struct vhost_scsi, > > > > > > > > + vs_event_work); > > > > > > > > + struct tcm_vhost_evt *evt; > > > > > > > > + struct llist_node *llnode; > > > > > > > > + > > > > > > > > + llnode = llist_del_all(&vs->vs_event_list); > > > > > > > > > > > > > > The assumption is that this is slow path thing, no need to worry about > > > > > > > speed, yes? so let's do simple list_add, list_del etc. > > > > > > > > > > > > Why it is simpler? > > > > > > > > > > simple list_ with a lock is easier to use correctly. > > > > > > > > You need to use it correctly anyway for cmd. Why you want a lock here? > > > > > > > > > > We are using llist for cmd. Why it is better using > > > > > > one for evt and the different for cmd? Similar code makes people easier > > > > > > to read. > > > > > > > > > > OK fair enough. But my idea is above to use a work > > > > > structure instead of a list of events. > > > > > This way we don't need extra list at all, > > > > > no new locks, nothing. > > > > > > > > Again, You are welcome to implement your ideas. > > > > > > I might but I'm otherwise occupied at the moment. 
> > > > > > > > > > > + while (llnode) { > > > > > > > > + evt = llist_entry(llnode, struct tcm_vhost_evt, list); > > > > > > > > + llnode = llist_next(llnode); > > > > > > > > + tcm_vhost_do_evt_work(vs, evt); > > > > > > > > + tcm_vhost_free_evt(vs, evt); > > > > > > > > + } > > > > > > > > +} > > > > > > > > + > > > > > > > > /* Fill in status and signal that we are done processing this command > > > > > > > > * > > > > > > > > * This is scheduled in the vhost work queue so we are called with the owner > > > > > > > > @@ -807,9 +927,42 @@ static void vhost_scsi_ctl_handle_kick(struct vhost_work *work) > > > > > > > > pr_debug("%s: The handling func for control queue.\n", __func__); > > > > > > > > } > > > > > > > > > > > > > > > > +static int tcm_vhost_send_evt(struct vhost_scsi *vs, struct tcm_vhost_tpg *tpg, > > > > > > > > + struct se_lun *lun, u32 event, u32 reason) > > > > > > > > +{ > > > > > > > > + struct tcm_vhost_evt *evt; > > > > > > > > + > > > > > > > > + evt = tcm_vhost_allocate_evt(vs, event, reason); > > > > > > > > + if (!evt) > > > > > > > > + return -ENOMEM; > > > > > > > > + > > > > > > > > + if (tpg && lun) { > > > > > > > > + /* TODO: share lun setup code with virtio-scsi.ko */ > > > > > > > > + evt->event.lun[0] = 0x01; > > > > > > > > + evt->event.lun[1] = tpg->tport_tpgt & 0xFF; > > > > > > > > + if (lun->unpacked_lun >= 256) > > > > > > > > + evt->event.lun[2] = lun->unpacked_lun >> 8 | 0x40 ; > > > > > > > > + evt->event.lun[3] = lun->unpacked_lun & 0xFF; > > > > > > > > + } > > > > > > > > + > > > > > > > > + llist_add(&evt->list, &vs->vs_event_list); > > > > > > > > + vhost_work_queue(&vs->dev, &vs->vs_event_work); > > > > > > > > + > > > > > > > > + return 0; > > > > > > > > +} > > > > > > > > + > > > > > > > > static void vhost_scsi_evt_handle_kick(struct vhost_work *work) > > > > > > > > { > > > > > > > > - pr_debug("%s: The handling func for event queue.\n", __func__); > > > > > > > > + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, > > > > > > > > + poll.work); > > > > > > > > + struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev); > > > > > > > > + > > > > > > > > + if (!tcm_vhost_check_endpoint(vq)) > > > > > > > > + return; > > > > > > > > > > > > > > Again just drop this check. > > > > > > > > > > > > Why? > > > > > > > > > > Same as above. > > > > > Because it's not safe to do this outside the vq mutex. > > > > > > > > Is it also not safe to the same in -net? > > > > > > It's safe there since we dereference the pointer. > > > > > > > Why is it not safe? > > > > > > Because you don't dereference the pointer. 
> > > > > > > > +
> > > > > > > > + if (tcm_vhost_check_events_dropped(vs))
> > > > > > > > + tcm_vhost_send_evt(vs, NULL, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
> > > > > > > > +
> > > > > > > > }
> > > > > > > >
> > > > > > > > static void vhost_scsi_handle_kick(struct vhost_work *work)
> > > > > > > > @@ -833,6 +986,7 @@ static void vhost_scsi_flush(struct vhost_scsi *vs)
> > > > > > > > for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
> > > > > > > > vhost_scsi_flush_vq(vs, i);
> > > > > > > > vhost_work_flush(&vs->dev, &vs->vs_completion_work);
> > > > > > > > + vhost_work_flush(&vs->dev, &vs->vs_event_work);
> > > > > > > > }
> > > > > > > >
> > > > > > > > /*
> > > > > > > > @@ -891,6 +1045,7 @@ static int vhost_scsi_set_endpoint(
> > > > > > > > return -EEXIST;
> > > > > > > > }
> > > > > > > > tv_tpg->tv_tpg_vhost_count++;
> > > > > > > > + tv_tpg->vhost_scsi = vs;
> > > > > > > > vs_tpg[tv_tpg->tport_tpgt] = tv_tpg;
> > > > > > > > smp_mb__after_atomic_inc();
> > > > > > > > match = true;
> > > > > > > > @@ -974,6 +1129,7 @@ static int vhost_scsi_clear_endpoint(
> > > > > > > > goto err_tpg;
> > > > > > > > }
> > > > > > > > tv_tpg->tv_tpg_vhost_count--;
> > > > > > > > + tv_tpg->vhost_scsi = NULL;
> > > > > > > > vs->vs_tpg[target] = NULL;
> > > > > > > > match = true;
> > > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex);
> > > > > > > > @@ -1033,6 +1189,11 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
> > > > > > > > return -ENOMEM;
> > > > > > > >
> > > > > > > > vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work);
> > > > > > > > + vhost_work_init(&s->vs_event_work, tcm_vhost_evt_work);
> > > > > > > > +
> > > > > > > > + s->vs_events_nr = 0;
> > > > > > > > + s->vs_events_dropped = false;
> > > > > > > > + mutex_init(&s->vs_events_lock);
> > > > > > > >
> > > > > > > > s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick;
> > > > > > > > s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick;
> > > > > > > > @@ -1163,6 +1324,42 @@ static char *tcm_vhost_dump_proto_id(struct tcm_vhost_tport *tport)
> > > > > > > > return "Unknown";
> > > > > > > > }
> > > > > > > >
> > > > > > > > +static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg,
> > > > > > > > + struct se_lun *lun, bool plug)
> > > > > > > > +{
> > > > > > > > +
> > > > > > > > + struct vhost_scsi *vs = tpg->vhost_scsi;
> > > > > > > > + u32 reason;
> > > > > > > > +
> > > > > > > > + mutex_lock(&tpg->tv_tpg_mutex);
> > > > > > > > + vs = tpg->vhost_scsi;
> > > > > > > > + mutex_unlock(&tpg->tv_tpg_mutex);
> > > > > > > > + if (!vs)
> > > > > > > > + return -EOPNOTSUPP;
> > > > > > >
> > > > > > > Why EOPNOTSUPP? When does this happen?
> > > > > >
> > > > > > When tpg->vhost_scsi has not been setup. E.g. no one starts a vhost-scsi
> > > > > > guest.
> > > > >
> > > > > So ENODEV or something.
> > > >
> > > > okay.
> > > > > > > > +
> > > > > > > > + if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
> > > > > > > > + return -EOPNOTSUPP;
> > > > > > > > +
> > > > > > > > + if (plug)
> > > > > > > > + reason = VIRTIO_SCSI_EVT_RESET_RESCAN;
> > > > > > > > + else
> > > > > > > > + reason = VIRTIO_SCSI_EVT_RESET_REMOVED;
> > > > > > > > +
> > > > > > >
> > > > > > > You have dropped the tpg lock so tpg->vhost_scsi can become
> > > > > > > NULL now. Why is this safe?
> > > > > > >
> > > > > > > > + return tcm_vhost_send_evt(vs, tpg, lun,
> > > > > > > > + VIRTIO_SCSI_T_TRANSPORT_RESET,
> > > > > > > > + reason);
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> > > > > > > > +{
> > > > > > > > + return tcm_vhost_do_plug(tpg, lun, true);
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
> > > > > > > > +{
> > > > > > > > + return tcm_vhost_do_plug(tpg, lun, false);
> > > > > > > > +}
> > > > > > > > +
> > > > > > >
> > > > > > > What are these wrappers for? Seem useless ...
> > > > > >
> > > > > > It make people easier to understand what's the true and false is about
> > > > > > in tcm_vhost_do_plug.
> > > > >
> > > > > So just pass in the reason and not a bool.
> > > > > When readers see
> > > > > VIRTIO_SCSI_EVT_RESET_RESCAN/VIRTIO_SCSI_EVT_RESET_REMOVED
> > > > > they will know what is going on.
> > > >
> > > > How can you make the connection that VIRTIO_SCSI_EVT_RESET_RESCAN is
> > > > hotplug and VIRTIO_SCSI_EVT_RESET_REMOVED is hot-unplug at first glance?
> > > >
> > > > > > > > static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
> > > > > > > > struct se_lun *lun)
> > > > > > > > {
> > > > > > > > @@ -1173,18 +1370,21 @@ static int tcm_vhost_port_link(struct se_portal_group *se_tpg,
> > > > > > > > tv_tpg->tv_tpg_port_count++;
> > > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex);
> > > > > > > >
> > > > > > > > + tcm_vhost_hotplug(tv_tpg, lun);
> > > > > > > > +
> > > > > > > > return 0;
> > > > > > > > }
> > > > > > > >
> > > > > > > > static void tcm_vhost_port_unlink(struct se_portal_group *se_tpg,
> > > > > > > > - struct se_lun *se_lun)
> > > > > > > > + struct se_lun *lun)
> > > > > > > > {
> > > > > > > > struct tcm_vhost_tpg *tv_tpg = container_of(se_tpg,
> > > > > > > > struct tcm_vhost_tpg, se_tpg);
> > > > > > > > -
> > > > > > > > mutex_lock(&tv_tpg->tv_tpg_mutex);
> > > > > > > > tv_tpg->tv_tpg_port_count--;
> > > > > > > > mutex_unlock(&tv_tpg->tv_tpg_mutex);
> > > > > > > > +
> > > > > > > > + tcm_vhost_hotunplug(tv_tpg, lun);
> > > > > > > > }
> > > > > > > >
> > > > > > > > static struct se_node_acl *tcm_vhost_make_nodeacl(
> > > > > > > > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h
> > > > > > > > index 1d2ae7a..94e9ee53 100644
> > > > > > > > --- a/drivers/vhost/tcm_vhost.h
> > > > > > > > +++ b/drivers/vhost/tcm_vhost.h
> > > > > > > > @@ -53,6 +53,7 @@ struct tcm_vhost_nacl {
> > > > > > > > struct se_node_acl se_node_acl;
> > > > > > > > };
> > > > > > > >
> > > > > > > > +struct vhost_scsi;
> > > > > > > > struct tcm_vhost_tpg {
> > > > > > > > /* Vhost port target portal group tag for TCM */
> > > > > > > > u16 tport_tpgt;
> > > > > > > > @@ -70,6 +71,8 @@ struct tcm_vhost_tpg {
> > > > > > > > struct tcm_vhost_tport *tport;
> > > > > > > > /* Returned by tcm_vhost_make_tpg() */
> > > > > > > > struct se_portal_group se_tpg;
> > > > > > > > + /* Pointer back to struct vhost_scsi, protected by tv_tpg_mutex */
> > > > > > > > + struct vhost_scsi *vhost_scsi;
> > > > > > > > };
> > > > > > > >
> > > > > > > > struct tcm_vhost_tport {
> > > > > > > > @@ -83,6 +86,13 @@ struct tcm_vhost_tport {
> > > > > > > > struct se_wwn tport_wwn;
> > > > > > > > };
> > > > > > > >
> > > > > > > > +struct tcm_vhost_evt {
> > > > > > > > + /* virtio_scsi event */
> > > > > > > > + struct virtio_scsi_event event;
> > > > > > > > + /* virtio_scsi event list, serviced from vhost worker thread */
> > > > > > > > + struct llist_node list;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > /*
> > > > > > > > * As per request from MST, keep TCM_VHOST related ioctl defines out of
> > > > > > > > * linux/vhost.h (user-space) for now..
> > > > > > > > --
> > > > > > > > 1.8.1.4
> > > > > >
> > > > > > --
> > > > > > Asias
> > > >
> > > > --
> > > > Asias
> >
> > --
> > Asias

--
Asias

^ permalink raw reply [flat|nested] 37+ messages in thread
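
For reference, a sketch of how the two suggestions above (return -ENODEV when
no vhost-scsi instance is bound, and pass the virtio reason instead of a bool)
could be folded into the helper. This is illustrative only, not the code that
was eventually merged, and it deliberately leaves Michael's remaining question
open: vs can still go stale once tv_tpg_mutex is dropped.

static int tcm_vhost_do_plug(struct tcm_vhost_tpg *tpg,
		struct se_lun *lun, u32 reason)
{
	struct vhost_scsi *vs;

	mutex_lock(&tpg->tv_tpg_mutex);
	vs = tpg->vhost_scsi;
	mutex_unlock(&tpg->tv_tpg_mutex);
	if (!vs)
		return -ENODEV;	/* no vhost-scsi instance bound to this tpg yet */

	if (!tcm_vhost_check_feature(vs, VIRTIO_SCSI_F_HOTPLUG))
		return -EOPNOTSUPP;

	return tcm_vhost_send_evt(vs, tpg, lun,
			VIRTIO_SCSI_T_TRANSPORT_RESET, reason);
}

/* The wrappers then spell out which reason means what: */
static int tcm_vhost_hotplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
{
	return tcm_vhost_do_plug(tpg, lun, VIRTIO_SCSI_EVT_RESET_RESCAN);
}

static int tcm_vhost_hotunplug(struct tcm_vhost_tpg *tpg, struct se_lun *lun)
{
	return tcm_vhost_do_plug(tpg, lun, VIRTIO_SCSI_EVT_RESET_REMOVED);
}
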
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support
  2013-04-22 15:22 ` Asias He
@ 2013-04-23 4:18 ` Rusty Russell
  2013-04-23 4:45 ` Asias He
  0 siblings, 1 reply; 37+ messages in thread
From: Rusty Russell @ 2013-04-23 4:18 UTC (permalink / raw)
To: Asias He, Michael S. Tsirkin
Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

Asias He <asias@redhat.com> writes:
> On Mon, Apr 22, 2013 at 04:17:04PM +0300, Michael S. Tsirkin wrote:
>> > > > > > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL);
>> > > > > > >
>> > > > > > > I think kzalloc not needed here, you init all fields.
>> > > > > >
>> > > > > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0.
>> > > > >
>> > > > > So that is 4 bytes just init them when you set rest of lun.
>> > > >
>> > > > It is not in the fast path. You can do it this way but not a must.
>> > >
>> > > I think that's a bit cleaner than relying on kzalloc to zero-initialize.
>> >
>> > I think it is a bit shorter, 4 lines saved.
>>
>> OTOH you don't have to think "what about the rest of the bytes?" then
>> hunt through the code and go "ah, it's kzalloc".
>> Looks it's a style issue but I prefer explicit initialization.
>
> It is just make none sense to argue about this. If you really want you
> can make patches to remove all the lines where use kzalloc because it is
> cleaner.

I prefer explicit initialization too, FWIW.

>> > > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs,
>> > > > > > > > + struct tcm_vhost_evt *evt)
>> > > > > > > > +{
>> > > > > > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT];
>> > > > > > > > + struct virtio_scsi_event *event = &evt->event;
>> > > > > > > > + struct virtio_scsi_event __user *eventp;
>> > > > > > > > + unsigned out, in;
>> > > > > > > > + int head, ret;
>> > > > > > > > +
>> > > > > > > > + if (!tcm_vhost_check_endpoint(vq))
>> > > > > > > > + return;
>> > > > > > > > +
>> > > > > > > > + mutex_lock(&vs->vs_events_lock);
>> > > > > > > > + mutex_lock(&vq->mutex);
>> > > > > > > > +again:
>> > > > > > > > + vhost_disable_notify(&vs->dev, vq);
>> > > > > > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
>> > > > > > > > + ARRAY_SIZE(vq->iov), &out, &in,
>> > > > > > > > + NULL, NULL);
>> > > > > > > > + if (head < 0) {
>> > > > > > > > + vs->vs_events_dropped = true;
>> > > > > > > > + goto out;
>> > > > > > > > + }
>> > > > > > > > + if (head == vq->num) {
>> > > > > > > > + if (vhost_enable_notify(&vs->dev, vq))
>> > > > > > > > + goto again;
>> > > > > > > >
>> > > > > > > Please write loops with while() or for().
>> > > > > > > Not with goto. goto is for error handling.
>> > > > > >
>> > > > > > This makes extra indention which is more ugly.
>> > > > >
>> > > > > I don't care. No loops with goto and that's a hard rule.
>> > > >
>> > > > It is not a loop.
>> > >
>> > > If same code repeats, it's a loop.
>> >
>> > We really need here is to call vhost_get_vq_desc once. It's not a loop
>> > like other places to use while() or for() with vhost_get_vq_desc.
>>
>> Exactly. Like a mutex for example? We only need to
>> lock it once. See __mutex_lock_common - uses a for loop,
>> doesn't it?
>
> So what?
>
>> So fix this please, use while or for or even until.
>
> I do not think it is necessary.

I tend to use again: in such corner cases, where I don't expect it to
be invoked very often. A do loop looks too normal.

I do exactly the same thing in virtio-net.c, for the same case.

Would you like me to review the entire thing?

Cheers,
Rusty.
^ permalink raw reply [flat|nested] 37+ messages in thread
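
For readers following the loop-versus-goto argument, a sketch of the same
retry logic written with a for loop, as Michael asked for. What happens when
the ring is still empty after re-enabling notification is an assumption here
(mark the event as dropped, mirroring the head < 0 case); that branch of the
hunk is not quoted above, and this is not the code that was merged.

	/* Sketch only: the again:/goto retry expressed as a loop. */
	for (;;) {
		vhost_disable_notify(&vs->dev, vq);
		head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
				ARRAY_SIZE(vq->iov), &out, &in,
				NULL, NULL);
		if (head < 0) {
			vs->vs_events_dropped = true;
			goto out;
		}
		if (head != vq->num)
			break;	/* got a descriptor for the event */
		/*
		 * Ring empty: re-enable notification and retry only if the
		 * guest added buffers in the meantime; otherwise record the
		 * dropped event (assumed) and give up.
		 */
		if (!vhost_enable_notify(&vs->dev, vq)) {
			vs->vs_events_dropped = true;
			goto out;
		}
	}
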
* Re: [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support
  2013-04-23 4:18 ` Rusty Russell
@ 2013-04-23 4:45 ` Asias He
  0 siblings, 0 replies; 37+ messages in thread
From: Asias He @ 2013-04-23 4:45 UTC (permalink / raw)
To: Rusty Russell
Cc: kvm, Michael S. Tsirkin, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Tue, Apr 23, 2013 at 01:48:51PM +0930, Rusty Russell wrote:
> Asias He <asias@redhat.com> writes:
> > On Mon, Apr 22, 2013 at 04:17:04PM +0300, Michael S. Tsirkin wrote:
> >> > > > > > > > + evt = kzalloc(sizeof(*evt), GFP_KERNEL);
> >> > > > > > >
> >> > > > > > > I think kzalloc not needed here, you init all fields.
> >> > > > > >
> >> > > > > > Not really! evt->event.lun[4-7] is not initialized. It needs to be 0.
> >> > > > >
> >> > > > > So that is 4 bytes just init them when you set rest of lun.
> >> > > >
> >> > > > It is not in the fast path. You can do it this way but not a must.
> >> > >
> >> > > I think that's a bit cleaner than relying on kzalloc to zero-initialize.
> >> >
> >> > I think it is a bit shorter, 4 lines saved.
> >>
> >> OTOH you don't have to think "what about the rest of the bytes?" then
> >> hunt through the code and go "ah, it's kzalloc".
> >> Looks it's a style issue but I prefer explicit initialization.
> >
> > It is just make none sense to argue about this. If you really want you
> > can make patches to remove all the lines where use kzalloc because it is
> > cleaner.
>
> I prefer explicit initialization too, FWIW.

I do not really care. I am OK with either way.

> >> > > > > > > > +static void tcm_vhost_do_evt_work(struct vhost_scsi *vs,
> >> > > > > > > > + struct tcm_vhost_evt *evt)
> >> > > > > > > > +{
> >> > > > > > > > + struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT];
> >> > > > > > > > + struct virtio_scsi_event *event = &evt->event;
> >> > > > > > > > + struct virtio_scsi_event __user *eventp;
> >> > > > > > > > + unsigned out, in;
> >> > > > > > > > + int head, ret;
> >> > > > > > > > +
> >> > > > > > > > + if (!tcm_vhost_check_endpoint(vq))
> >> > > > > > > > + return;
> >> > > > > > > > +
> >> > > > > > > > + mutex_lock(&vs->vs_events_lock);
> >> > > > > > > > + mutex_lock(&vq->mutex);
> >> > > > > > > > +again:
> >> > > > > > > > + vhost_disable_notify(&vs->dev, vq);
> >> > > > > > > > + head = vhost_get_vq_desc(&vs->dev, vq, vq->iov,
> >> > > > > > > > + ARRAY_SIZE(vq->iov), &out, &in,
> >> > > > > > > > + NULL, NULL);
> >> > > > > > > > + if (head < 0) {
> >> > > > > > > > + vs->vs_events_dropped = true;
> >> > > > > > > > + goto out;
> >> > > > > > > > + }
> >> > > > > > > > + if (head == vq->num) {
> >> > > > > > > > + if (vhost_enable_notify(&vs->dev, vq))
> >> > > > > > > > + goto again;
> >> > > > > > > >
> >> > > > > > > Please write loops with while() or for().
> >> > > > > > > Not with goto. goto is for error handling.
> >> > > > > >
> >> > > > > > This makes extra indention which is more ugly.
> >> > > > >
> >> > > > > I don't care. No loops with goto and that's a hard rule.
> >> > > >
> >> > > > It is not a loop.
> >> > >
> >> > > If same code repeats, it's a loop.
> >> >
> >> > We really need here is to call vhost_get_vq_desc once. It's not a loop
> >> > like other places to use while() or for() with vhost_get_vq_desc.
> >>
> >> Exactly. Like a mutex for example? We only need to
> >> lock it once. See __mutex_lock_common - uses a for loop,
> >> doesn't it?
> >
> > So what?
> >
> >> So fix this please, use while or for or even until.
> >
> > I do not think it is necessary.
>
> I tend to use again: in such corner cases, where I don't expect it to
> be invoked very often. A do loop looks too normal.
>
> I do exactly the same thing in virtio-net.c, for the same case.
>
> Would you like me to review the entire thing?

Sure. Thanks Rusty!

> Cheers,
> Rusty.

--
Asias

^ permalink raw reply [flat|nested] 37+ messages in thread
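
As a footnote to the kzalloc debate, the explicit-initialization variant both
reviewers say they prefer might look roughly like this inside the event
allocation path. The surrounding function and the exact LUN encoding are not
quoted in this thread excerpt, so treat the context (the event and reason
parameters, and the caller) as assumed rather than as the merged code.

	/* Sketch only: kmalloc() plus explicit zeroing instead of kzalloc(). */
	evt = kmalloc(sizeof(*evt), GFP_KERNEL);
	if (!evt)
		return NULL;

	evt->event.event = event;
	evt->event.reason = reason;
	/* Zero the whole LUN field up front so lun[4-7] are 0 ... */
	memset(evt->event.lun, 0, sizeof(evt->event.lun));
	/* ... then fill evt->event.lun[0..3] from tpg/lun as the patch does. */
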
[parent not found: <1366247154-3136-1-git-send-email-asias@redhat.com>]
* Re: [PATCH v7 0/3] tcm_vhost hotplug
  [not found] ` <1366247154-3136-1-git-send-email-asias@redhat.com>
@ 2013-04-18 7:36 ` Michael S. Tsirkin
  0 siblings, 0 replies; 37+ messages in thread
From: Michael S. Tsirkin @ 2013-04-18 7:36 UTC (permalink / raw)
To: Asias He
Cc: kvm, virtualization, target-devel, Stefan Hajnoczi, Paolo Bonzini

On Thu, Apr 18, 2013 at 09:05:51AM +0800, Asias He wrote:
> Changes in v7:
> - Add vhost_work_flush for vs->vs_event_work to this series
>
> Asias He (3):
> tcm_vhost: Introduce tcm_vhost_check_feature()
> tcm_vhost: Add helper to check if endpoint is setup
> tcm_vhost: Add hotplug/hotunplug support

Okay it looks like I missed some issues in the previous review round.
Sorry about that. Sent more comments now. They are kind of minor so
if already applied, we can fix them incrementally I think.

> drivers/vhost/tcm_vhost.c | 238 +++++++++++++++++++++++++++++++++++++++++++++-
> drivers/vhost/tcm_vhost.h | 10 ++
> 2 files changed, 244 insertions(+), 4 deletions(-)
>
> --
> 1.8.1.4

^ permalink raw reply [flat|nested] 37+ messages in thread
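
The v7 change called out above is the extra flush of the event work.
Reassembled from the hunk quoted earlier in the thread, the flush path then
looks roughly like this (the local declaration and the comments are added
here for readability):

	static void vhost_scsi_flush(struct vhost_scsi *vs)
	{
		int i;

		/* Drain the kick work of every virtqueue first ... */
		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
			vhost_scsi_flush_vq(vs, i);
		/* ... then the command completion work ... */
		vhost_work_flush(&vs->dev, &vs->vs_completion_work);
		/* ... and, new in v7, the hotplug event work, so no queued
		 * event is still in flight afterwards. */
		vhost_work_flush(&vs->dev, &vs->vs_event_work);
	}
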
Thread overview: 37+ messages
2013-04-09 8:51 [PATCH v5 0/3] tcm_vhost hotplug Asias He
2013-04-09 8:51 ` [PATCH v5 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
2013-04-09 8:51 ` [PATCH v5 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He
2013-04-09 8:51 ` [PATCH v5 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He
2013-04-12 23:10 ` [PATCH v5 0/3] tcm_vhost hotplug Nicholas A. Bellinger
[not found] ` <1365808250.26011.68.camel@haakon2.linux-iscsi.org>
2013-04-13 3:20 ` [PATCH v6 " Asias He
2013-04-13 3:20 ` [PATCH v6 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
2013-04-13 3:20 ` [PATCH v6 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He
2013-04-13 3:20 ` [PATCH v6 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He
2013-04-14 8:53 ` [PATCH v5 0/3] tcm_vhost hotplug Michael S. Tsirkin
2013-04-17 13:17 ` Michael S. Tsirkin
2013-04-18 1:05 ` [PATCH v7 " Asias He
2013-04-18 1:05 ` [PATCH v7 1/3] tcm_vhost: Introduce tcm_vhost_check_feature() Asias He
2013-04-18 7:06 ` Michael S. Tsirkin
[not found] ` <20130418070606.GC13787@redhat.com>
2013-04-18 8:25 ` Asias He
2013-04-18 7:44 ` Michael S. Tsirkin
2013-04-18 1:05 ` [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup Asias He
2013-04-18 1:05 ` [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support Asias He
2013-04-18 1:09 ` [PATCH v5 0/3] tcm_vhost hotplug Asias He
[not found] ` <1366247154-3136-3-git-send-email-asias@redhat.com>
2013-04-18 7:09 ` [PATCH v7 2/3] tcm_vhost: Add helper to check if endpoint is setup Michael S. Tsirkin
2013-04-18 8:32 ` Asias He
2013-04-18 7:38 ` Michael S. Tsirkin
2013-04-22 8:53 ` Asias He
[not found] ` <20130422085327.GA21692@hj.localdomain>
2013-04-22 13:28 ` Michael S. Tsirkin
2013-04-22 15:00 ` Asias He
2013-04-22 16:21 ` Michael S. Tsirkin
[not found] ` <1366247154-3136-4-git-send-email-asias@redhat.com>
2013-04-18 7:34 ` [PATCH v7 3/3] tcm_vhost: Add hotplug/hotunplug support Michael S. Tsirkin
2013-04-18 8:59 ` Asias He
2013-04-18 8:21 ` Michael S. Tsirkin
2013-04-19 2:34 ` Asias He
2013-04-20 19:01 ` Michael S. Tsirkin
2013-04-22 9:20 ` Asias He
2013-04-22 13:17 ` Michael S. Tsirkin
2013-04-22 15:22 ` Asias He
2013-04-23 4:18 ` Rusty Russell
2013-04-23 4:45 ` Asias He
[not found] ` <1366247154-3136-1-git-send-email-asias@redhat.com>
2013-04-18 7:36 ` [PATCH v7 0/3] tcm_vhost hotplug Michael S. Tsirkin