From: Mike Christie <michael.christie@oracle.com>
To: sgarzare@redhat.com, stefanha@redhat.com,
linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com,
virtualization@lists.linux-foundation.org
Subject: [RFC PATCH 7/8] vhost, vhost-scsi: flush IO vqs then send TMF rsp
Date: Fri, 4 Dec 2020 01:56:32 -0600
Message-ID: <1607068593-16932-8-git-send-email-michael.christie@oracle.com> (raw)
In-Reply-To: <1607068593-16932-1-git-send-email-michael.christie@oracle.com>
With one worker, we will always send the SCSI cmd responses before the
TMF rsp, because LIO always completes the SCSI cmds first, and their
completions call vhost_scsi_release_cmd, which queues them on the work
queue ahead of the TMF.

When the next patch adds multiple worker support, the worker threads
could still be sending their cmd responses when the TMF's work is run.
This patch therefore has vhost-scsi flush the IO vqs handled by other
worker threads before sending the TMF response.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
drivers/vhost/scsi.c | 20 ++++++++++++++++++--
drivers/vhost/vhost.c | 9 +++++++++
drivers/vhost/vhost.h | 1 +
3 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 08bc513..8005a7f 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1179,12 +1179,28 @@ static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
{
struct vhost_scsi_tmf *tmf = container_of(work, struct vhost_scsi_tmf,
vwork);
+ struct vhost_virtqueue *vq;
+ unsigned int cpu;
int resp_code;
+ int i;
- if (tmf->scsi_resp == TMR_FUNCTION_COMPLETE)
+ if (tmf->scsi_resp == TMR_FUNCTION_COMPLETE) {
+ /*
+ * When processing a TMF, LIO completes the cmds then the
+ * TMF, so with one worker the TMF always completes after
+ * cmds. For multiple worker support, we must flush every
+ * worker that runs on a different cpu than the CTL vq.
+ */
+ cpu = tmf->vhost->vqs[VHOST_SCSI_VQ_CTL].vq.cpu;
+ for (i = VHOST_SCSI_VQ_IO; i < tmf->vhost->dev.nvqs; i++) {
+ vq = &tmf->vhost->vqs[i].vq;
+ if (cpu != vq->cpu)
+ vhost_vq_work_flush(vq);
+ }
resp_code = VIRTIO_SCSI_S_FUNCTION_SUCCEEDED;
- else
+ } else {
resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED;
+ }
vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs,
tmf->vq_desc, &tmf->resp_iov, resp_code);
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index f425d0f..4aae504 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -250,6 +250,15 @@ void vhost_work_dev_flush(struct vhost_dev *dev)
}
EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
+void vhost_vq_work_flush(struct vhost_virtqueue *vq)
+{
+ if (vq->cpu != -1)
+ flush_work(&vq->work);
+ else
+ vhost_work_dev_flush(vq->dev);
+}
+EXPORT_SYMBOL_GPL(vhost_vq_work_flush);
+
/* Flush any work that has been scheduled. When calling this, don't hold any
* locks that are also used by the callback. */
void vhost_poll_flush(struct vhost_poll *poll)
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 28ff4a2..2d306f8 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -40,6 +40,7 @@ struct vhost_poll {
void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work);
bool vhost_has_work(struct vhost_dev *dev);
void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work);
+void vhost_vq_work_flush(struct vhost_virtqueue *vq);
void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
__poll_t mask, struct vhost_dev *dev,
--
1.8.3.1