linux-block.vger.kernel.org archive mirror
* Question on setting IO polling behavior and documentations
@ 2025-08-14  5:14 Teng Qin
  2025-08-14  9:31 ` Yu Kuai
  0 siblings, 1 reply; 4+ messages in thread
From: Teng Qin @ 2025-08-14  5:14 UTC (permalink / raw)
  To: linux-block, hch; +Cc: linux-nvme, axboe, sagi

Hello maintainers,
I'm trying to explore and test the IO polling behavior of block devices
in my system, NVMe drives specifically. While doing so, I noticed that
writing to the legacy /sys/block/<disk>/queue/io_poll attribute no longer
changes the polling behavior of the device as expected.

I tracked this down to the change
  a614dd228035 ("block: don't allow writing to the poll queue attribute")
(https://lore.kernel.org/all/20211012111226.760968-16-hch@lst.de/)
The dmesg message prompts the user to use driver-specific parameters
instead, but I cannot find, either in the code or in the documentation,
an nvme driver parameter that sets the polling behavior for the whole
drive in the way the legacy io_poll sysfs interface did.
I realize I can request polling per IO with flags from io_uring or the
nvme passthrough commands. But is there still some configuration that
switches an entire drive's IO to polling, so that legacy applications
not using io_uring can benefit as well? After reading the whole patch
set, it seems to me that polling control has moved to a per-bio flag;
does that make drive-wide control of polling behavior simply
impossible now?
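
For context, here is my rough understanding of the per-IO model, as an
illustrative sketch only: the flag names match what I see in mainline,
but the helper itself is made up for this mail and is not a real call
site.

  #include <linux/bio.h>
  #include <linux/blk_types.h>
  #include <linux/fs.h>

  /*
   * Illustrative only: polling is requested per submission (IOCB_HIPRI
   * from the submitter) and carried on each bio (REQ_POLLED), instead of
   * being a property of the whole request queue as the old io_poll
   * attribute suggested.
   */
  static void example_mark_bio_polled(struct bio *bio, struct kiocb *iocb)
  {
  	if (iocb->ki_flags & IOCB_HIPRI)
  		bio->bi_opf |= REQ_POLLED;
  }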

Moreover, the block layer documentation at
  Documentation/ABI/stable/sysfs-block
still documents the legacy behavior of the io_poll sysfs file. This is
confusing for users trying to figure out why a write to the file fails
or does not behave as expected after seeing the dmesg message,
particularly because many articles on the Internet still describe the
legacy behavior.
If the maintainers agree, I can help update this documentation.

Thanks a lot for your kind help!

Teng Qin


* Re: Question on setting IO polling behavior and documentations
  2025-08-14  5:14 Question on setting IO polling behavior and documentations Teng Qin
@ 2025-08-14  9:31 ` Yu Kuai
  2025-08-14 22:35   ` Teng Qin
  0 siblings, 1 reply; 4+ messages in thread
From: Yu Kuai @ 2025-08-14  9:31 UTC (permalink / raw)
  To: Teng Qin, linux-block, hch; +Cc: linux-nvme, axboe, sagi, yukuai (C)

Hi,

On 2025/08/14 13:14, Teng Qin wrote:
> Moreover, the block layer documentation at
>    Documentation/ABI/stable/sysfs-block
> still documents the legacy behavior of the io_poll sysfs file. This is
> confusing for users trying to figure out reason of the failed or
> unexpected behavior after writing to the file and seeing the dmesg,
> particularly because there are many articles on the Internet describing
> the legacy behavior.
> If the maintainers agree, I can help update these documentations.

Feel free to update the documentation. AFAIK there are some out-of-date
descriptions, and fixes are welcome.

Thanks,
Kuai



* Re: Question on setting IO polling behavior and documentations
  2025-08-14  9:31 ` Yu Kuai
@ 2025-08-14 22:35   ` Teng Qin
  2025-08-16  3:59     ` Keith Busch
  0 siblings, 1 reply; 4+ messages in thread
From: Teng Qin @ 2025-08-14 22:35 UTC (permalink / raw)
  To: Yu Kuai, linux-block; +Cc: hch, axboe, sagi, yukuai (C)

On Thu, Aug 14, 2025 at 5:31 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2025/08/14 13:14, Teng Qin wrote:
>> Moreover, the block layer documentation at
>>    Documentation/ABI/stable/sysfs-block
>> still documents the legacy behavior of the io_poll sysfs file. This is
>> confusing for users trying to figure out reason of the failed or
>> unexpected behavior after writing to the file and seeing the dmesg,
>> particularly because there are many articles on the Internet describing
>> the legacy behavior.
>> If the maintainers agree, I can help update these documentations.
>
> Feel free to update the documentations, AFAIK, there are some out of
> date descriptions and it's welcome to fix them

Thanks a lot for the information. Before writing anything, I just
want to confirm: is there indeed no per-device control of polling
behavior any more? Are io_uring and driver-specific features like
nvme passthrough the only ways to go right now?

For users with legacy applications that could benefit from polling
but still make traditional IO calls, would it make sense to add a
per-device override? I can think of a few ways to add a per-device
setting that tags all bios for that device as polled (if the queue
is capable); a rough sketch follows below. But I'm not sure whether
that has been discussed before, or whether it was intentionally
discouraged. I would love to hear the maintainers' opinions.
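
Purely as an illustration of the idea, not a proposal of an actual
interface: the "example_force_poll" knob below does not exist, the hook
point is hypothetical, and I have left out the "does this queue support
polling" check; only REQ_POLLED is a real flag.

  #include <linux/bio.h>
  #include <linux/blk_types.h>

  /* Hypothetical knob, e.g. a new per-device sysfs file. */
  static bool example_force_poll;

  /*
   * Hypothetical hook on the submission path: tag every bio for this
   * device as polled (in a real version, only if the queue actually
   * supports polling).
   */
  static void example_force_poll_bio(struct bio *bio)
  {
  	if (example_force_poll)
  		bio->bi_opf |= REQ_POLLED;
  }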

>
>
> Thanks,
> Kuai
>


* Re: Question on setting IO polling behavior and documentations
  2025-08-14 22:35   ` Teng Qin
@ 2025-08-16  3:59     ` Keith Busch
  0 siblings, 0 replies; 4+ messages in thread
From: Keith Busch @ 2025-08-16  3:59 UTC (permalink / raw)
  To: Teng Qin; +Cc: Yu Kuai, linux-block, hch, axboe, sagi, yukuai (C)

On Thu, Aug 14, 2025 at 06:35:01PM -0400, Teng Qin wrote:
> On Thu, Aug 14, 2025 at 5:31 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >
> > Hi,
> >
> > On 2025/08/14 13:14, Teng Qin wrote:
> >> Moreover, the block layer documentation at
> >>    Documentation/ABI/stable/sysfs-block
> >> still documents the legacy behavior of the io_poll sysfs file. This is
> >> confusing for users trying to figure out reason of the failed or
> >> unexpected behavior after writing to the file and seeing the dmesg,
> >> particularly because there are many articles on the Internet describing
> >> the legacy behavior.
> >> If the maintainers agree, I can help update these documentations.
> >
> > Feel free to update the documentations, AFAIK, there are some out of
> > date descriptions and it's welcome to fix them
> 
> Thanks a lot for the information. Before writing anything, I just
> want to confirm there is indeed no more per-device control for
> polling behavior? Is io_uring and driver-specific features like
> nvme passthrough the only ways to go right now?
> 
> For users who have legacy applications that could benefit from
> polling but still make traditional IO calls, would it still
> make sense to add a per-device override? I can think of some
> ways of adding a config for a specific device so it would
> tag all bio-s for that device as polling (if queue capable).
> But I'm not sure if that has been discussed before or maybe
> that was intentionally discouraged? Would love to hear from
> the maintainers for opinion.

You can only reach the polling features through io_uring. You can use it
with the normal read/write uring commands, or with the nvme passthrough
uring commands. Successfully using it requires that you have set up your
module parameters to reserve some queues for polling.
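
Roughly, the shape of a polled read with liburing is something like the
untested sketch below (error handling omitted; the file must be opened
O_DIRECT with a block-aligned buffer, and the driver needs poll queues
reserved, e.g. nvme.poll_queues=4 for nvme pci):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <unistd.h>
  #include <liburing.h>

  static int polled_read(const char *path, void *buf, unsigned int len)
  {
  	struct io_uring ring;
  	struct io_uring_sqe *sqe;
  	struct io_uring_cqe *cqe;
  	int fd, ret;

  	fd = open(path, O_RDONLY | O_DIRECT);
  	io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL);

  	sqe = io_uring_get_sqe(&ring);
  	io_uring_prep_read(sqe, fd, buf, len, 0);
  	io_uring_submit(&ring);

  	/* with IORING_SETUP_IOPOLL this actively polls for the completion */
  	io_uring_wait_cqe(&ring, &cqe);
  	ret = cqe->res;
  	io_uring_cqe_seen(&ring, cqe);

  	io_uring_queue_exit(&ring);
  	close(fd);
  	return ret;
  }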

The synchronous calls (preadv2/pwritev2) had their polling capability
removed due to issues. Here's the commit that removed it:

  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9650b453a3d4b1b8ed4ea8bcb9b40109608d1faf


end of thread, other threads:[~2025-08-16  3:59 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-14  5:14 Question on setting IO polling behavior and documentations Teng Qin
2025-08-14  9:31 ` Yu Kuai
2025-08-14 22:35   ` Teng Qin
2025-08-16  3:59     ` Keith Busch
