From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, Ming Lei <ming.lei@redhat.com>,
stable@vger.kernel.org, Mark Ray <mark.ray@hpe.com>
Subject: [PATCH] blk-mq: avoid sysfs buffer overflow by too many CPU cores
Date: Thu, 15 Aug 2019 20:15:18 +0800
Message-ID: <20190815121518.16675-1-ming.lei@redhat.com>

It is reported that a sysfs buffer overflow can be triggered when
showing the CPUs of one hctx if the hctx covers too many CPU cores
(> 841 with a 4K PAGE_SIZE).

Use snprintf to avoid the potential buffer overflow.

Cc: stable@vger.kernel.org
Cc: Mark Ray <mark.ray@hpe.com>
Fixes: 676141e48af7 ("blk-mq: don't dump CPU -> hw queue map on driver load")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
block/blk-mq-sysfs.c | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)
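
For reviewers, here is a minimal userspace sketch of the truncation
approach (purely illustrative, not part of the patch): BUF_SIZE stands
in for PAGE_SIZE, and show_cpus() and the cpus[] array are made-up
names. It mirrors the same "back up over the last entry and append
'...'" logic used in blk_mq_hw_sysfs_cpus_show() below, with all writes
bounded by snprintf().

#include <stdio.h>

#define BUF_SIZE 64	/* hypothetical stand-in for PAGE_SIZE */

static size_t show_cpus(char *page, const unsigned int *cpus, int nr_cpus)
{
	size_t len = 0;
	size_t last_len = 0;
	int i;

	for (i = 0; i < nr_cpus; i++) {
		/* snprintf returns the length it *wanted* to write */
		int cur_len = snprintf(page + len, BUF_SIZE - 1 - len,
				       i ? ", %u" : "%u", cpus[i]);

		if (cur_len >= (int)(BUF_SIZE - 1 - len)) {
			/* back up over the last full entry so "..." fits */
			len -= last_len;
			len += snprintf(page + len, BUF_SIZE - 1 - len, "...");
			break;
		}
		len += cur_len;
		last_len = cur_len;
	}
	len += snprintf(page + len, BUF_SIZE - 1 - len, "\n");
	return len;
}

int main(void)
{
	char page[BUF_SIZE];
	unsigned int cpus[128];
	int i;

	for (i = 0; i < 128; i++)
		cpus[i] = i;

	show_cpus(page, cpus, 128);
	fputs(page, stdout);	/* prints a truncated list ending in "...\n" */
	return 0;
}
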
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index d6e1a9bd7131..e75f41a98415 100644
--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -164,22 +164,28 @@ static ssize_t blk_mq_hw_sysfs_nr_reserved_tags_show(struct blk_mq_hw_ctx *hctx,
return sprintf(page, "%u\n", hctx->tags->nr_reserved_tags);
}
+/* avoid overflow caused by too many CPU cores */
static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
{
- unsigned int i, first = 1;
- ssize_t ret = 0;
-
- for_each_cpu(i, hctx->cpumask) {
- if (first)
- ret += sprintf(ret + page, "%u", i);
- else
- ret += sprintf(ret + page, ", %u", i);
-
- first = 0;
+ unsigned int cpu = cpumask_first(hctx->cpumask);
+ ssize_t len = snprintf(page, PAGE_SIZE - 1, "%u", cpu);
+ int last_len = len;
+
+ while ((cpu = cpumask_next(cpu, hctx->cpumask)) < nr_cpu_ids) {
+ int cur_len = snprintf(page + len, PAGE_SIZE - 1 - len,
+ ", %u", cpu);
+ if (cur_len >= PAGE_SIZE - 1 - len) {
+ len -= last_len;
+ len += snprintf(page + len, PAGE_SIZE - 1 - len,
+ "...");
+ break;
+ }
+ len += cur_len;
+ last_len = cur_len;
}
- ret += sprintf(ret + page, "\n");
- return ret;
+ len += snprintf(page + len, PAGE_SIZE - 1 - len, "\n");
+ return len;
}
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_nr_tags = {
--
2.20.1