From: "Michael S. Tsirkin" <mst@redhat.com>
To: Gonglei <arei.gonglei@huawei.com>
Cc: qemu-devel@nongnu.org, virtio-dev@lists.oasis-open.org,
luonengjun@huawei.com, stefanha@redhat.com, pbonzini@redhat.com,
berrange@redhat.com, weidong.huang@huawei.com,
wu.wubin@huawei.com, mike.caraman@nxp.com, agraf@suse.de,
xin.zeng@intel.com, claudio.fontana@huawei.com, nmorey@kalray.eu,
vincent.jardin@6wind.com, jianjay.zhou@huawei.com,
hanweidong@huawei.com, peter.huangpeng@huawei.com,
arei.gonglei@hotmail.com, eblake@redhat.com
Subject: Re: [Qemu-devel] [PATCH v9 09/12] virtio-crypto: add data queue processing handler
Date: Thu, 27 Oct 2016 19:59:13 +0300
Message-ID: <20161027191443-mutt-send-email-mst@kernel.org>
In-Reply-To: <1476963957-176296-10-git-send-email-arei.gonglei@huawei.com>
On Thu, Oct 20, 2016 at 07:45:54PM +0800, Gonglei wrote:
> Introduce the VirtIOCryptoReq structure to store
> crypto requests so that we can easily support
> asynchronous crypto operations in the future.
>
> At present, we only support cipher and algorithm
> chaining.
>
> Signed-off-by: Gonglei <arei.gonglei@huawei.com>
> ---
> hw/virtio/virtio-crypto.c | 358 +++++++++++++++++++++++++++++++++++++-
> include/hw/virtio/virtio-crypto.h | 4 +
> 2 files changed, 361 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
> index 4be65e0..eabc61f 100644
> --- a/hw/virtio/virtio-crypto.c
> +++ b/hw/virtio/virtio-crypto.c
> @@ -311,6 +311,362 @@ static void virtio_crypto_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> } /* end for loop */
> }
>
> +static void virtio_crypto_init_request(VirtIOCrypto *vcrypto, VirtQueue *vq,
> + VirtIOCryptoReq *req)
> +{
> + req->vcrypto = vcrypto;
> + req->vq = vq;
> + req->in = NULL;
> + req->in_iov = NULL;
> + req->in_num = 0;
> + req->in_len = 0;
> + req->flags = CRYPTODEV_BACKEND_ALG__MAX;
> + req->u.sym_op_info = NULL;
> +}
> +
> +static void virtio_crypto_free_request(VirtIOCryptoReq *req)
> +{
> + if (req) {
> + if (req->flags == CRYPTODEV_BACKEND_ALG_SYM) {
> + g_free(req->u.sym_op_info);
> + }
> + g_free(req);
> + }
> +}
> +
> +static void
> +virtio_crypto_sym_input_data_helper(VirtIODevice *vdev,
> + VirtIOCryptoReq *req,
> + uint32_t status,
> + CryptoDevBackendSymOpInfo *sym_op_info)
> +{
> + size_t s, len;
> +
> + if (status != VIRTIO_CRYPTO_OK) {
> + return;
> + }
> +
> + len = sym_op_info->dst_len;
> + /* Save the cipher result */
> + s = iov_from_buf(req->in_iov, req->in_num, 0, sym_op_info->dst, len);
> + if (s != len) {
> + virtio_error(vdev, "virtio-crypto dest data incorrect");
> + return;
> + }
> +
> + iov_discard_front(&req->in_iov, &req->in_num, len);
> +
> + if (sym_op_info->op_type ==
> + VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING) {
> + /* Save the digest result */
> + s = iov_from_buf(req->in_iov, req->in_num, 0,
> + sym_op_info->digest_result,
> + sym_op_info->digest_result_len);
> + if (s != sym_op_info->digest_result_len) {
> + virtio_error(vdev, "virtio-crypto digest result incorrect");
> + }
> + }
> +}
> +
> +static void virtio_crypto_req_complete(VirtIOCryptoReq *req, uint8_t status)
> +{
> + VirtIOCrypto *vcrypto = req->vcrypto;
> + VirtIODevice *vdev = VIRTIO_DEVICE(vcrypto);
> +
> + if (req->flags == CRYPTODEV_BACKEND_ALG_SYM) {
> + virtio_crypto_sym_input_data_helper(vdev, req, status,
> + req->u.sym_op_info);
> + }
> + stb_p(&req->in->status, status);
> + virtqueue_push(req->vq, &req->elem, req->in_len);
> + virtio_notify(vdev, req->vq);
> +}
> +
> +static VirtIOCryptoReq *
> +virtio_crypto_get_request(VirtIOCrypto *s, VirtQueue *vq)
> +{
> + VirtIOCryptoReq *req = virtqueue_pop(vq, sizeof(VirtIOCryptoReq));
> +
> + if (req) {
> + virtio_crypto_init_request(s, vq, req);
> + }
> + return req;
> +}
> +
> +static CryptoDevBackendSymOpInfo *
> +virtio_crypto_sym_op_helper(VirtIODevice *vdev,
> + struct virtio_crypto_cipher_para *cipher_para,
> + struct virtio_crypto_alg_chain_data_para *alg_chain_para,
> + struct iovec *iov, unsigned int out_num)
> +{
> + CryptoDevBackendSymOpInfo *op_info;
> + uint32_t src_len = 0, dst_len = 0;
> + uint32_t iv_len = 0;
> + uint32_t aad_len = 0, hash_result_len = 0;
> + uint32_t hash_start_src_offset = 0, len_to_hash = 0;
> + uint32_t cipher_start_src_offset = 0, len_to_cipher = 0;
> +
> + size_t max_len, curr_size = 0;
> + size_t s;
> +
> + /* Plain cipher */
> + if (cipher_para) {
> + iv_len = virtio_ldl_p(vdev, &cipher_para->iv_len);
> + src_len = virtio_ldl_p(vdev, &cipher_para->src_data_len);
> + dst_len = virtio_ldl_p(vdev, &cipher_para->dst_data_len);
> + } else if (alg_chain_para) { /* Algorithm chain */
> + iv_len = virtio_ldl_p(vdev, &alg_chain_para->iv_len);
> + src_len = virtio_ldl_p(vdev, &alg_chain_para->src_data_len);
> + dst_len = virtio_ldl_p(vdev, &alg_chain_para->dst_data_len);
> +
> + aad_len = virtio_ldl_p(vdev, &alg_chain_para->aad_len);
> + hash_result_len = virtio_ldl_p(vdev,
> + &alg_chain_para->hash_result_len);
> + hash_start_src_offset = virtio_ldl_p(vdev,
> + &alg_chain_para->hash_start_src_offset);
> + cipher_start_src_offset = virtio_ldl_p(vdev,
> + &alg_chain_para->cipher_start_src_offset);
> + len_to_cipher = virtio_ldl_p(vdev, &alg_chain_para->len_to_cipher);
> + len_to_hash = virtio_ldl_p(vdev, &alg_chain_para->len_to_hash);
> + } else {
> + return NULL;
> + }
> +
You don't need the virtio_ wrappers for modern devices, I think. Just use the
LE accessors.
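For example (just a sketch; it assumes the device is modern-only, so these
fields are always little-endian):

    iv_len  = ldl_le_p(&cipher_para->iv_len);
    src_len = ldl_le_p(&cipher_para->src_data_len);
    dst_len = ldl_le_p(&cipher_para->dst_data_len);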
> + max_len = iv_len + aad_len + src_len + dst_len + hash_result_len;
> + if (unlikely(max_len > LONG_MAX - sizeof(CryptoDevBackendSymOpInfo))) {
> + virtio_error(vdev, "virtio-crypto too big length");
> + return NULL;
> + }
> +
> + op_info = g_malloc0(sizeof(CryptoDevBackendSymOpInfo) + max_len);
Wow - this can allocate up to about 20GB (five guest-controlled 32-bit lengths
summed), triggerable by the guest. Not nice.
Also, the maximum size depends on the host's long size, and the guest has no
way to know it. Not nice either.
Add a max size field to the device config?
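For instance, something along these lines (only a sketch - max_size is a
hypothetical field that would have to be added to the device config space and
advertised to the guest):

    VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(vdev);

    /* hypothetical per-request limit taken from the device config space */
    if (unlikely(max_len > vcrypto->conf.max_size)) {
        virtio_error(vdev, "virtio-crypto request length %zu too big", max_len);
        return NULL;
    }

That way the guest knows the limit up front instead of having to guess at the
host's LONG_MAX.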
> + op_info->iv_len = iv_len;
> + op_info->src_len = src_len;
> + op_info->dst_len = dst_len;
> + op_info->aad_len = aad_len;
> + op_info->digest_result_len = hash_result_len;
> + op_info->hash_start_src_offset = hash_start_src_offset;
> + op_info->len_to_hash = len_to_hash;
> + op_info->cipher_start_src_offset = cipher_start_src_offset;
> + op_info->len_to_cipher = len_to_cipher;
> + /* Handle the initialization vector */
> + if (op_info->iv_len > 0) {
> + DPRINTF("iv_len=%" PRIu32 "\n", op_info->iv_len);
> + op_info->iv = op_info->data + curr_size;
This shows that splitting a single file like this is not helpful. I wanted to
know how data is initialized and what makes this math safe, and couldn't
figure it out from this patch alone.
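(For what it's worth, I assume the backend struct ends with a flexible array
member, something like

    struct CryptoDevBackendSymOpInfo {
        ...
        uint8_t data[0]; /* iv, aad, src, dst and digest are carved out of this */
    };

so op_info->data + curr_size stays within the max_len bytes allocated above -
but that definition lives in another patch, so I can't verify it here.)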
> +
> + s = iov_to_buf(iov, out_num, 0, op_info->iv, op_info->iv_len);
> + if (unlikely(s != op_info->iv_len)) {
> + virtio_error(vdev, "virtio-crypto iv incorrect");
> + goto err;
> + }
> + iov_discard_front(&iov, &out_num, op_info->iv_len);
> + curr_size += op_info->iv_len;
> + }
> +
> + /* Handle additional authentication data if exist */
-> if it exists
> + if (op_info->aad_len > 0) {
> + DPRINTF("aad_len=%" PRIu32 "\n", op_info->aad_len);
> + op_info->aad_data = op_info->data + curr_size;
> +
> + s = iov_to_buf(iov, out_num, 0, op_info->aad_data, op_info->aad_len);
> + if (unlikely(s != op_info->aad_len)) {
> + virtio_error(vdev, "virtio-crypto additional auth data incorrect");
> + goto err;
> + }
> + iov_discard_front(&iov, &out_num, op_info->aad_len);
> +
> + curr_size += op_info->aad_len;
> + }
> +
> + /* Handle the source data */
> + if (op_info->src_len > 0) {
> + DPRINTF("src_len=%" PRIu32 "\n", op_info->src_len);
> + op_info->src = op_info->data + curr_size;
> +
> + s = iov_to_buf(iov, out_num, 0, op_info->src, op_info->src_len);
> + if (unlikely(s != op_info->src_len)) {
> + virtio_error(vdev, "virtio-crypto source data incorrect");
> + goto err;
> + }
> + iov_discard_front(&iov, &out_num, op_info->src_len);
> +
> + curr_size += op_info->src_len;
> + }
> +
> + /* Handle the destination data */
> + op_info->dst = op_info->data + curr_size;
> + curr_size += op_info->dst_len;
> +
> + DPRINTF("dst_len=%" PRIu32 "\n", op_info->dst_len);
> +
> + /* Handle the hash digest result */
> + if (hash_result_len > 0) {
> + DPRINTF("hash_result_len=%" PRIu32 "\n", hash_result_len);
> + op_info->digest_result = op_info->data + curr_size;
> + }
> +
> + return op_info;
> +
> +err:
> + g_free(op_info);
> + return NULL;
> +}
> +
> +static int
> +virtio_crypto_handle_sym_req(VirtIOCrypto *vcrypto,
> + struct virtio_crypto_sym_data_req *req,
> + CryptoDevBackendSymOpInfo **sym_op_info,
> + struct iovec *iov, unsigned int out_num)
> +{
> + VirtIODevice *vdev = VIRTIO_DEVICE(vcrypto);
> + uint32_t op_type;
> + CryptoDevBackendSymOpInfo *op_info;
> +
> + op_type = virtio_ldl_p(vdev, &req->op_type);
> +
> + if (op_type == VIRTIO_CRYPTO_SYM_OP_CIPHER) {
> + op_info = virtio_crypto_sym_op_helper(vdev, &req->u.cipher.para,
> + NULL, iov, out_num);
> + if (!op_info) {
> + return -EFAULT;
> + }
> + op_info->op_type = op_type;
> + } else if (op_type == VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING) {
> + op_info = virtio_crypto_sym_op_helper(vdev, NULL,
> + &req->u.chain.para,
> + iov, out_num);
> + if (!op_info) {
> + return -EFAULT;
> + }
> + op_info->op_type = op_type;
> + } else {
> + /* VIRTIO_CRYPTO_SYM_OP_NONE */
> + error_report("virtio-crypto unsupported cipher type");
> + return -VIRTIO_CRYPTO_NOTSUPP;
virtio_error?
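i.e. something like (a sketch, if an unknown op_type is to be treated as a
fatal device error):

    virtio_error(vdev, "virtio-crypto unsupported cipher type");
    return -EFAULT;

If it stays non-fatal, completing the request with VIRTIO_CRYPTO_NOTSUPP as
you do is probably fine, but note that error_report on a guest-triggerable
path can flood the host log.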
> + }
> +
> + *sym_op_info = op_info;
> +
> + return 0;
> +}
> +
> +static int
> +virtio_crypto_handle_request(VirtIOCryptoReq *request)
> +{
> + VirtIOCrypto *vcrypto = request->vcrypto;
> + VirtIODevice *vdev = VIRTIO_DEVICE(vcrypto);
> + VirtQueueElement *elem = &request->elem;
> + int queue_index = virtio_crypto_vq2q(virtio_get_queue_index(request->vq));
> + struct virtio_crypto_op_data_req req;
> + int ret;
> + struct iovec *in_iov;
> + struct iovec *out_iov;
> + unsigned in_num;
> + unsigned out_num;
> + uint32_t opcode;
> + uint8_t status = VIRTIO_CRYPTO_ERR;
> + uint64_t session_id;
> + CryptoDevBackendSymOpInfo *sym_op_info = NULL;
> + Error *local_err = NULL;
> +
> + if (elem->out_num < 1 || elem->in_num < 1) {
> + virtio_error(vdev, "virtio-crypto dataq missing headers");
> + return -1;
> + }
> +
> + out_num = elem->out_num;
> + out_iov = elem->out_sg;
> + in_num = elem->in_num;
> + in_iov = elem->in_sg;
> + if (unlikely(iov_to_buf(out_iov, out_num, 0, &req, sizeof(req))
> + != sizeof(req))) {
> + virtio_error(vdev, "virtio-crypto request outhdr too short");
> + return -1;
> + }
> + iov_discard_front(&out_iov, &out_num, sizeof(req));
> +
> + if (in_iov[in_num - 1].iov_len <
> + sizeof(struct virtio_crypto_inhdr)) {
> + virtio_error(vdev, "virtio-crypto request inhdr too short");
> + return -1;
> + }
> + /* We always touch the last byte, so just see how big in_iov is. */
> + request->in_len = iov_size(in_iov, in_num);
> + request->in = (void *)in_iov[in_num - 1].iov_base
> + + in_iov[in_num - 1].iov_len
> + - sizeof(struct virtio_crypto_inhdr);
> + iov_discard_back(in_iov, &in_num, sizeof(struct virtio_crypto_inhdr));
> +
> + /*
> + * The length of operation result, including dest_data
> + * and digest_result if exist.
> + */
> + request->in_num = in_num;
> + request->in_iov = in_iov;
> +
> + opcode = virtio_ldl_p(vdev, &req.header.opcode);
> + session_id = virtio_ldq_p(vdev, &req.header.session_id);
> +
> + switch (opcode) {
> + case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
> + case VIRTIO_CRYPTO_CIPHER_DECRYPT:
> + ret = virtio_crypto_handle_sym_req(vcrypto,
> + &req.u.sym_req,
> + &sym_op_info,
> + out_iov, out_num);
> + /* Serious errors, need to reset virtio crypto device */
> + if (ret == -EFAULT) {
> + return -1;
> + } else if (ret == -VIRTIO_CRYPTO_NOTSUPP) {
> + virtio_crypto_req_complete(request, VIRTIO_CRYPTO_NOTSUPP);
> + virtio_crypto_free_request(request);
> + } else {
> + sym_op_info->session_id = session_id;
> +
> + /* Set request's parameter */
> + request->flags = CRYPTODEV_BACKEND_ALG_SYM;
> + request->u.sym_op_info = sym_op_info;
> + ret = cryptodev_backend_sym_operation(vcrypto->cryptodev,
> + sym_op_info, queue_index, &local_err);
> + if (ret < 0) {
> + status = VIRTIO_CRYPTO_ERR;
> + if (local_err) {
> + error_report_err(local_err);
> + }
> + } else { /* ret >= 0 */
> + status = VIRTIO_CRYPTO_OK;
> + }
> + virtio_crypto_req_complete(request, status);
> + virtio_crypto_free_request(request);
> + }
> + break;
> + case VIRTIO_CRYPTO_HASH:
> + case VIRTIO_CRYPTO_MAC:
> + case VIRTIO_CRYPTO_AEAD_ENCRYPT:
> + case VIRTIO_CRYPTO_AEAD_DECRYPT:
> + default:
> + error_report("virtio-crypto unsupported dataq opcode: %u",
> + opcode);
> + virtio_crypto_req_complete(request, VIRTIO_CRYPTO_NOTSUPP);
> + virtio_crypto_free_request(request);
> + }
> +
> + return 0;
> +}
> +
> +static void virtio_crypto_handle_dataq(VirtIODevice *vdev, VirtQueue *vq)
> +{
> + VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(vdev);
> + VirtIOCryptoReq *req;
> +
> + while ((req = virtio_crypto_get_request(vcrypto, vq))) {
> + if (virtio_crypto_handle_request(req) < 0) {
> + virtqueue_detach_element(req->vq, &req->elem, 0);
> + virtio_crypto_free_request(req);
> + break;
> + }
> + }
> +}
> +
> static uint64_t virtio_crypto_get_features(VirtIODevice *vdev,
> uint64_t features,
> Error **errp)
> @@ -370,7 +726,7 @@ static void virtio_crypto_device_realize(DeviceState *dev, Error **errp)
> vcrypto->curr_queues = 1;
>
> for (i = 0; i < vcrypto->max_queues; i++) {
> - virtio_add_queue(vdev, 1024, NULL);
> + virtio_add_queue(vdev, 1024, virtio_crypto_handle_dataq);
> }
>
> vcrypto->ctrl_vq = virtio_add_queue(vdev, 64, virtio_crypto_handle_ctrl);
> diff --git a/include/hw/virtio/virtio-crypto.h b/include/hw/virtio/virtio-crypto.h
> index 4a4b3da..2628056 100644
> --- a/include/hw/virtio/virtio-crypto.h
> +++ b/include/hw/virtio/virtio-crypto.h
> @@ -58,6 +58,10 @@ typedef struct VirtIOCryptoReq {
> VirtQueueElement elem;
> /* flags of operation, such as type of algorithm */
> uint32_t flags;
> + struct virtio_crypto_inhdr *in;
> + struct iovec *in_iov; /* Head address of dest iovec */
> + unsigned int in_num; /* Number of dest iovec */
> + size_t in_len;
> VirtQueue *vq;
> struct VirtIOCrypto *vcrypto;
> union {
> --
> 1.8.3.1
>