From: Sungho Bae
To: mst@redhat.com, jasowang@redhat.com
Cc: xuanzhuo@linux.alibaba.com, eperezma@redhat.com, virtualization@lists.linux.dev, linux-kernel@vger.kernel.org, Sungho Bae
Subject: [RFC PATCH v1 0/4] virtio: add noirq system sleep PM callbacks for virtio-mmio
Date: Thu, 16 Apr 2026 02:38:29 +0900
Message-Id: <20260415173833.6319-1-baver.bae@gmail.com>
X-Mailer: git-send-email 2.34.1

Hi all,

Some virtio-mmio based devices, such as virtio-clock or virtio-regulator,
must become operational before other devices have their regular PM restore
callbacks invoked, because those other devices depend on them.

In general, the PM framework provides three phases for the system sleep
sequence (freeze, freeze_late, freeze_noirq) and the corresponding resume
phases. However, the virtio core only supports the normal freeze/restore
phase, so virtio drivers have no way to participate in the noirq phase,
which runs with IRQs disabled and is guaranteed to run before any
normal-phase restore callbacks.

This series adds the core infrastructure and the virtio-mmio transport
wiring so that virtio drivers can implement freeze_noirq/restore_noirq
callbacks.

Design overview
===============

The noirq phase runs with device IRQ handlers disabled and must avoid
sleepable operations. The main constraints addressed are:

- might_sleep() in virtio_add_status() and virtio_features_ok().
- virtio_synchronize_cbs() in virtio_reset_device() (IRQs are already
  quiesced).
- spin_lock_irq() in virtio_config_core_enable() (not safe to call with
  interrupts already disabled).
- Memory allocation during vq setup (virtqueue_reinit_vring() reuses
  existing buffers instead).

The series provides noirq-safe variants for each of these, plus a new
config_ops->reset_vqs() callback that lets the transport reprogram queue
registers without freeing or reallocating vring memory.

When a driver implements restore_noirq, the device bring-up
(reset -> ACKNOWLEDGE -> DRIVER -> finalize_features -> FEATURES_OK)
happens in the noirq phase. The subsequent normal-phase
virtio_device_restore() detects this and skips the redundant
re-initialization.

Patch breakdown
===============

Patch 1 is a preparatory refactoring with no functional change.
Patches 2-3 add the core infrastructure. Patch 4 wires it up for
virtio-mmio.

1. virtio: separate PM restore and reset_done paths

   Splits virtio_device_restore_priv() into independent
   virtio_device_restore() and virtio_device_reset_done() paths, using a
   shared virtio_device_reinit() helper. This is a pure refactoring that
   makes the restore path independently extensible without complicating
   the boolean dispatch.

2. virtio_ring: export virtqueue_reinit_vring() for noirq restore

   Adds a thin exported wrapper around the existing static
   virtqueue_reset_split()/virtqueue_reset_packed() helpers, which reset
   vring indices and descriptor state in place without any memory
   allocation. This makes them usable from the transport's reset_vqs()
   callback in noirq context.

3. virtio: add noirq system sleep PM infrastructure

   Adds noirq-safe helpers (virtio_add_status_noirq,
   virtio_features_ok_noirq, virtio_reset_device_noirq,
   virtio_config_core_enable_noirq, virtio_device_ready_noirq) and the
   freeze_noirq/restore_noirq driver callbacks, plus the
   config_ops->reset_vqs() transport hook. Modifies
   virtio_device_restore() to skip bring-up when restore_noirq has
   already run.

4. virtio-mmio: wire up noirq system sleep PM callbacks

   Implements vm_reset_vqs(), which iterates the existing virtqueues,
   reinitializes the vring state via virtqueue_reinit_vring(), and
   reprograms the MMIO queue registers. Adds
   virtio_mmio_freeze_noirq/virtio_mmio_restore_noirq and registers them
   via SET_NOIRQ_SYSTEM_SLEEP_PM_OPS().

Testing
=======

Build-tested with arm64 cross-compilation
(make ARCH=arm64 M=drivers/virtio). Runtime-tested on an internal
virtio-mmio platform with virtio-clock, confirming that the clock device
is functional before other devices' normal restore() callbacks run.

Questions for reviewers
=======================

- virtio_config_core_enable_noirq() is currently a separate helper that
  duplicates the spin_lock logic, differing from
  virtio_config_core_enable() only in locking context. Would it be
  better to merge them into a single function with irqsave locking, or
  is the current separate-helper approach acceptable?

- Should reset_vqs be a mandatory callback when restore_noirq is used,
  or should the core silently skip vq re-initialization when the vq list
  is empty (the current behavior)?

- Some config ops, such as get_status() and set_status(), are also used
  in the noirq restore path. The current implementation therefore
  assumes that any transport whose drivers use the noirq callbacks has
  made these ops safe to call in noirq context. Is that assumption
  acceptable, or is a flag indicating noirq support (or separate noirq
  variants of these callbacks) needed?
Sungho Bae (4):
  virtio: separate PM restore and reset_done paths
  virtio_ring: export virtqueue_reinit_vring() for noirq restore
  virtio: add noirq system sleep PM infrastructure
  virtio-mmio: wire up noirq system sleep PM callbacks

 drivers/virtio/virtio.c       | 234 ++++++++++++++++++++++++++++++----
 drivers/virtio/virtio_mmio.c  |  79 ++++++++++++
 drivers/virtio/virtio_ring.c  |  19 +++
 include/linux/virtio.h        |   8 ++
 include/linux/virtio_config.h |  29 +++++
 include/linux/virtio_ring.h   |   3 +
 6 files changed, 346 insertions(+), 26 deletions(-)

-- 
2.34.1