From: Louis Sautier <sautier.louis@gmail.com>
To: sashiko-reviews@lists.linux.dev
Cc: linux-hwmon@vger.kernel.org
Subject: Re: [PATCH 2/2] scsi: mpt3sas: add hwmon support
Date: Thu, 14 May 2026 05:39:04 +0200	[thread overview]
Message-ID: <106e2981-35a1-4559-ae20-2026c224f590@gmail.com> (raw)
In-Reply-To: <20260514010209.5C3ABC19425@smtp.kernel.org>

On 14/05/2026 03:02, sashiko-bot@kernel.org wrote:
> Could exposing this read to unprivileged users lead to excessive firmware
> requests and mutex contention?
> 
> Since the sysfs files are world-readable, any local user can read them in a
> tight loop. Every read directly calls mpt3sas_config_get_iounit_pg7(), which
> issues synchronous requests to the firmware over the PCIe bus and acquires
> the shared config_cmds.mutex:
> 
> _hwmon_read() ->
>     mpt3sas_config_get_iounit_pg7() ->
>         _config_request() ->
>             mutex_lock(&ioc->config_cmds.mutex)
> 
> Does this allow an unprivileged user to starve other storage subsystem
> components that rely on this mutex for administrative tasks or device
> discovery?
> 
> Could we implement a caching mechanism (for example, caching the readings
> for a short period using jiffies) to rate-limit these firmware requests?

My initial implementation had a 1-second jiffies-based cache; I
dropped it after noticing that drivers/nvme/host/hwmon.c takes the
same approach and issues a fresh request on every read, with no
caching.

The cover letter's contention figure came from a less direct
measurement and was too optimistic. I re-measured directly against
ioc->config_cmds.mutex: N concurrent unprivileged workers hammer
the sysfs temperature file on a 9500-8i / SAS3816 while a separate
"administrative" reader runs in the foreground and has its read
latency recorded. The same setup against the system's NVMe drive
is included below for comparison:

hwmon: mpt3sas (/sys/class/hwmon/hwmon3/temp1_input)
nproc: 16

baseline (no concurrent readers): p50=14.6µs  p95=18.9µs

Foreground reader latency with N concurrent unprivileged hammers:

  N   agg r/s   p50 µs   p95 µs   max µs
  1     65244     38.0     60.2     60.8
  2     57722     57.0     60.1    120.5
  4     55026     90.8    115.5    152.9
  8     53688    207.2    247.9    300.1
 16     52188    345.8    390.3    444.8

hwmon: nvme (/sys/class/hwmon/hwmon2/temp1_input)
nproc: 16

baseline (no concurrent readers): p50=268.6µs  p95=289.4µs

Foreground reader latency with N concurrent unprivileged hammers:

  N   agg r/s   p50 µs   p95 µs   max µs
  1      3558    803.8    839.8   1008.3
  2      3549   1342.0   1443.6   1558.7
  4      3543   2154.8   2473.0   2756.6
  8      3518   4130.9   4459.6   4677.4
 16      3522   7394.6   8743.6   8804.3

Aggregate throughput is mutex-bound: every hwmon read serializes
on config_cmds.mutex, so adding workers does not increase the
firmware request rate; it only lengthens the queue, and foreground
latency grows roughly linearly with N. The worst foreground p95 I
observed was ~8.5 ms at N=nproc=96 on the 9305-24i (below).

Across the other two test boxes:
- 9305-24i / SAS3224 (nproc=96): mpt3sas baseline 31 µs, p95 at
  N=96 ~8.5 ms; same shape, ~2x slower per read than the more
  recent 9500-8i / SAS3816 above.
- 9211-4i / SAS2004: no mpt3sas sensors reported, no hwmon
  device registered, no contention surface (graceful-skip path
  in the cover letter).

If a maintainer thinks the cache should come back to bound
worst-case latency independently of the number of concurrent
readers, I am happy to re-add it.

Thread overview: 11+ messages
2026-05-12 21:47 [PATCH 0/2] scsi: mpt3sas: add hwmon support Louis Sautier
2026-05-12 21:47 ` [PATCH 1/2] scsi: mpt3sas: add IO Unit Page 7 config accessor Louis Sautier
2026-05-12 21:47 ` [PATCH 2/2] scsi: mpt3sas: add hwmon support Louis Sautier
2026-05-13  3:57   ` Guenter Roeck
2026-05-14  1:11     ` Louis Sautier
2026-05-14  1:26       ` Guenter Roeck
2026-05-14  1:02   ` sashiko-bot
2026-05-14  3:39     ` Louis Sautier [this message]
2026-05-14  5:17       ` Guenter Roeck
2026-05-14 19:57         ` Louis Sautier
2026-05-14 20:58           ` Guenter Roeck
