From: stuart hayes <stuart.w.hayes@gmail.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Lukas Wunner <lukas@wunner.de>
Cc: Michael Kelley <mhklinux@outlook.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Rafael J . Wysocki" <rafael@kernel.org>,
Martin Belanger <Martin.Belanger@dell.com>,
Oliver O'Halloran <oohall@gmail.com>,
Daniel Wagner <dwagner@suse.de>, Keith Busch <kbusch@kernel.org>,
David Jeffery <djeffery@redhat.com>,
Jeremy Allison <jallison@ciq.com>, Jens Axboe <axboe@fb.com>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
Nathan Chancellor <nathan@kernel.org>,
Jan Kiszka <jan.kiszka@seimens.com>,
Bert Karwatzki <spasswolf@web.de>
Subject: Re: [PATCH v9 0/4] shut down devices asynchronously
Date: Fri, 18 Oct 2024 19:27:01 -0500 [thread overview]
Message-ID: <7ec51cc8-b64f-4956-b4e6-4b67f1a8fa76@gmail.com> (raw)
In-Reply-To: <2024101808-subscribe-unwrapped-ee3d@gregkh>
On 10/18/2024 4:37 AM, Greg Kroah-Hartman wrote:
> On Fri, Oct 18, 2024 at 11:14:51AM +0200, Lukas Wunner wrote:
>> On Fri, Oct 18, 2024 at 07:49:51AM +0200, Greg Kroah-Hartman wrote:
>>> On Fri, Oct 18, 2024 at 03:26:05AM +0000, Michael Kelley wrote:
>>>> In the process, the workqueue code spins up additional worker threads
>>>> to handle the load. On the Hyper-V VM, 210 to 230 new kernel
>>>> threads are created during device_shutdown(), depending on the
>>>> timing. On the Pi 5, 253 are created. The max for this workqueue is
>>>> WQ_DFL_ACTIVE (256).
>> [...]
>>> I don't think we can put this type of load on all systems just to handle
>>> one specific type of "bad" hardware that takes long periods of time to
>>> shutdown, sorry.
>>
>> Parallelizing shutdown means shorter reboot times, less downtime,
>> less cost for CSPs.
>
> For some systems, yes, but as have been seen here, it comes at the
> offset of a huge CPU load at shutdown, with sometimes longer reboot
> times.
>
>> Modern servers (e.g. Sierra Forest with 288 cores) should handle
>> this load easily and may see significant benefits from parallelization.
>
> "may see", can you test this?
>
>> Perhaps a solution is to cap async shutdown based on the number of cores,
>> but always use async for certain device classes (e.g. nvme_subsys_class)?
>
> Maybe, but as-is, we can't take the changes this way, sorry. That is a
> regression from the situation of working hardware that many people have.
>
> thanks,
>
> greg k-h
Thank you both for your time and effort considering this. It didn't
occur to me that an extra few tens of milliseconds (or maxing out the
async workqueue) would be an issue.
To answer your earlier question (Michael), there shouldn't be a
possibility of deadlock regardless of the number of devices. While the
device shutdowns are scheduled on a workqueue rather than run in a loop,
they are still scheduled in the same order as they are without this
patch, and any device that is scheduled for shutdown should never have
to wait for a device that hasn't yet been scheduled. So even if only one
device shutdown could be scheduled at a time, it should still work
without deadlocking--it just wouldn't be able to do shutdowns in parallel.
And I believe there is still a benefit to having async shutdown enabled
even with one core. The NVMe shutdowns that take a while involve waiting
for drives to finish commands, so they are mostly just sleeping.
Workqueues will schedule another worker if one worker sleeps, so even a
single core system should be able to get a number of NVMe drives started
on their shutdowns in parallel.
I'll see what I can do to limit the amount of stuff that gets put on the
workqueue, though. I can likely limit it to just the asynchronous
device shutdowns (NVMe shutdowns), plus any devices that have to wait
for them (i.e., any devices of which they are dependents or consumers).
2024-10-09 17:57 [PATCH v9 0/4] shut down devices asynchronously Stuart Hayes
2024-10-09 17:57 ` [PATCH v9 1/4] driver core: don't always lock parent in shutdown Stuart Hayes
2024-10-09 17:57 ` [PATCH v9 2/4] driver core: separate function to shutdown one device Stuart Hayes
2024-10-09 17:57 ` [PATCH v9 3/4] driver core: shut down devices asynchronously Stuart Hayes
2024-10-09 17:57 ` [PATCH v9 4/4] nvme-pci: Make driver prefer asynchronous shutdown Stuart Hayes
2024-10-11 4:22 ` [PATCH v9 0/4] shut down devices asynchronously Michael Kelley
2024-10-11 15:52 ` Laurence Oberman
2024-10-18 3:26 ` Michael Kelley
2024-10-18 5:49 ` Greg Kroah-Hartman
2024-10-18 9:14 ` Lukas Wunner
2024-10-18 9:37 ` Greg Kroah-Hartman
2024-10-19 0:27 ` stuart hayes [this message]
2024-10-20 0:24 ` Michael Kelley
2024-10-29 15:32 ` Christoph Hellwig
2025-01-30 3:06 ` [PATCH] driver core: optimize async device shutdown Sultan Alsawaf
2025-01-30 17:12 ` [PATCH v2] " Sultan Alsawaf