From: Christian Schoenebeck <linux_oss@crudebyte.com>
To: v9fs-developer@lists.sourceforge.net
Cc: netdev@vger.kernel.org,
Dominique Martinet <asmadeus@codewreck.org>,
Eric Van Hensbergen <ericvh@gmail.com>,
Latchesar Ionkov <lucho@ionkov.net>, Greg Kurz <groug@kaod.org>,
Vivek Goyal <vgoyal@redhat.com>
Subject: [PATCH v2 7/7] 9p/trans_virtio: resize sg lists to whatever is possible
Date: Mon, 20 Sep 2021 18:44:20 +0200
Message-ID: <0a2c16b9acf580a679f38354b5d60a68c5fb6c99.1632156835.git.linux_oss@crudebyte.com>
In-Reply-To: <cover.1632156835.git.linux_oss@crudebyte.com>
So far vq_sg_resize() used a lazy, all-or-nothing implementation: it
either resized exactly to the requested new amount of sg lists, or it
did not resize at all.

The problem with this is that if a user supplies a very large msize
value, the resize would simply fail and the user would be stuck with
the default maximum msize supported by the virtio transport.

To resolve this potential issue, change vq_sg_resize() to resize the
passed sg lists to whatever is possible on the machine.
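To illustrate what this buys in practice: the usable msize is derived
from the number of sg lists actually obtained, so even a partial resize
still raises the limit. Below is a minimal user space sketch of that
arithmetic; the values of PAGE_SIZE and SG_USER_PAGES_PER_LIST are
assumptions picked for this example only (in the kernel both are
determined at build time), while the formula itself is the one used in
p9_virtio_create() in the second hunk:

#include <stdio.h>

/* assumed example values, not the kernel's build-time definitions */
#define EX_PAGE_SIZE                4096UL
#define EX_SG_USER_PAGES_PER_LIST    128UL

/*
 * Usable msize for a given number of chained scatter/gather lists,
 * mirroring the trans_maxsize formula in p9_virtio_create(); the "- 3"
 * is taken over unchanged from the existing calculation.
 */
static unsigned long ex_trans_maxsize(unsigned long nsgl)
{
	return EX_PAGE_SIZE * ((nsgl * EX_SG_USER_PAGES_PER_LIST) - 3);
}

int main(void)
{
	/* e.g. 8 lists requested, but only 5 could be allocated */
	printf("requested 8 lists -> %lu bytes usable\n", ex_trans_maxsize(8));
	printf("achieved  5 lists -> %lu bytes usable\n", ex_trans_maxsize(5));
	return 0;
}

With these example values a partial resize to 5 of 8 requested lists
still yields roughly 2.5 MB of usable msize instead of staying at the
transport's default maximum.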
Signed-off-by: Christian Schoenebeck <linux_oss@crudebyte.com>
---
net/9p/trans_virtio.c | 65 ++++++++++++++++++++++++++++++++++++-------
1 file changed, 55 insertions(+), 10 deletions(-)
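As a side note for reviewers, here is a standalone user space sketch of
the underlying "grow to whatever is possible" strategy; grow_blocks()
and its parameters are made up purely for illustration and are not part
of this series:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define EX_BLOCK_SIZE 4096

/*
 * Try to grow an array of fixed size blocks to 'want' entries. On
 * allocation failure keep whatever was obtained so far and report
 * -ENOMEM only as a soft error, analogous to what vq_sg_resize()
 * does after this patch.
 */
static int grow_blocks(void ***blocks, size_t *have, size_t want)
{
	void **tmp;
	size_t i;
	int ret = 0;

	if (want <= *have)
		return 0;

	tmp = realloc(*blocks, want * sizeof(*tmp));
	if (!tmp)
		return -ENOMEM;
	*blocks = tmp;

	for (i = *have; i < want; ++i) {
		tmp[i] = malloc(EX_BLOCK_SIZE);
		if (!tmp[i]) {
			ret = -ENOMEM; /* soft error: keep what we got */
			want = i;
			break;
		}
	}
	*have = want;
	return ret;
}

int main(void)
{
	void **blocks = NULL;
	size_t have = 0;

	grow_blocks(&blocks, &have, 8);
	printf("achieved %zu blocks\n", have);
	return 0;
}

The point is that a failed allocation midway is reported, but everything
obtained up to that point stays usable, which is the contract
vq_sg_resize() follows below.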
diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
index 4cb75f45aa15..b9bab7ed2768 100644
--- a/net/9p/trans_virtio.c
+++ b/net/9p/trans_virtio.c
@@ -205,24 +205,66 @@ static struct virtqueue_sg *vq_sg_alloc(unsigned int nsgl)
  * amount of lists
  * @_vq_sg: scatter/gather lists to be resized
  * @nsgl: new amount of scatter/gather lists
+ *
+ * Old scatter/gather lists are retained. Only growing the size is supported.
+ * If the requested amount cannot be satisfied, then lists are increased to
+ * whatever is possible.
  */
 static int vq_sg_resize(struct virtqueue_sg **_vq_sg, unsigned int nsgl)
 {
 	struct virtqueue_sg *vq_sg;
+	unsigned int i;
+	size_t sz;
+	int ret = 0;
 	BUG_ON(!_vq_sg || !nsgl);
 	vq_sg = *_vq_sg;
+	if (nsgl > VIRTQUEUE_SG_NSGL_MAX)
+		nsgl = VIRTQUEUE_SG_NSGL_MAX;
 	if (vq_sg->nsgl == nsgl)
 		return 0;
+	if (vq_sg->nsgl > nsgl)
+		return -ENOTSUPP;
+
+	vq_sg = kzalloc(sizeof(struct virtqueue_sg) +
+		nsgl * sizeof(struct scatterlist *),
+		GFP_KERNEL);
-	/* lazy resize implementation for now */
-	vq_sg = vq_sg_alloc(nsgl);
 	if (!vq_sg)
 		return -ENOMEM;
-	kfree(*_vq_sg);
+	/* copy over old scatter gather lists */
+	sz = sizeof(struct virtqueue_sg) +
+		(*_vq_sg)->nsgl * sizeof(struct scatterlist *);
+	memcpy(vq_sg, *_vq_sg, sz);
+
+	vq_sg->nsgl = nsgl;
+
+	for (i = (*_vq_sg)->nsgl; i < nsgl; ++i) {
+		vq_sg->sgl[i] = kmalloc_array(
+			SG_MAX_SINGLE_ALLOC, sizeof(struct scatterlist),
+			GFP_KERNEL
+		);
+		/*
+		 * handle failed allocation as soft error, we take whatever
+		 * we get
+		 */
+		if (!vq_sg->sgl[i]) {
+			ret = -ENOMEM;
+			vq_sg->nsgl = nsgl = i;
+			break;
+		}
+		sg_init_table(vq_sg->sgl[i], SG_MAX_SINGLE_ALLOC);
+		if (i) {
+			/* chain the lists */
+			sg_chain(vq_sg->sgl[i - 1], SG_MAX_SINGLE_ALLOC,
+				 vq_sg->sgl[i]);
+		}
+	}
+	sg_mark_end(&vq_sg->sgl[nsgl - 1][SG_MAX_SINGLE_ALLOC - 1]);
+
 	*_vq_sg = vq_sg;
-	return 0;
+	return ret;
 }
/**
@@ -839,13 +881,16 @@ p9_virtio_create(struct p9_client *client, const char *devname, char *args)
 		if (nsgl > chan->vq_sg->nsgl) {
 			/*
 			 * if resize fails, no big deal, then just
-			 * continue with default msize instead
+			 * continue with whatever we got
 			 */
-			if (!vq_sg_resize(&chan->vq_sg, nsgl)) {
-				client->trans_maxsize =
-					PAGE_SIZE *
-					((nsgl * SG_USER_PAGES_PER_LIST) - 3);
-			}
+			vq_sg_resize(&chan->vq_sg, nsgl);
+			/*
+			 * actual allocation size might be less than
+			 * requested, so use vq_sg->nsgl instead of nsgl
+			 */
+			client->trans_maxsize =
+				PAGE_SIZE * ((chan->vq_sg->nsgl *
+				SG_USER_PAGES_PER_LIST) - 3);
 		}
 #endif /* CONFIG_ARCH_NO_SG_CHAIN */
 	}
--
2.20.1