From: Sungho Bae
To: mst@redhat.com, jasowang@redhat.com
Cc: xuanzhuo@linux.alibaba.com, eperezma@redhat.com, virtualization@lists.linux.dev, linux-kernel@vger.kernel.org, Sungho Bae
Subject: [RFC PATCH v4 0/4] virtio: add noirq system sleep PM callbacks for virtio-mmio
Date: Fri, 24 Apr 2026 02:40:35 +0900
Message-Id: <20260423174039.276-1-baver.bae@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Sungho Bae

Hi all,

Some virtio-mmio based devices, such as virtio-clock or virtio-regulator, must become operational before other devices have their regular PM restore callbacks invoked, because those other devices depend on them.

In general, the PM framework provides three freeze phases (freeze, freeze_late, freeze_noirq) for the system sleep sequence, along with the corresponding resume phases. However, the virtio core only supports the normal freeze/restore phase, so virtio drivers have no way to participate in the noirq phase, which runs with IRQs disabled and is guaranteed to run before any normal-phase restore callbacks.

This series adds the infrastructure and the virtio-mmio transport wiring so that virtio drivers can implement freeze_noirq/restore_noirq callbacks.

Design overview
===============

The noirq phase runs with device IRQ handlers disabled and must avoid sleepable operations. The main constraints addressed are:

- might_sleep() in virtio_add_status() and virtio_features_ok().
- virtio_synchronize_cbs() in virtio_reset_device() (IRQs are already quiesced).
- spin_lock_irq() in virtio_config_core_enable() (not safe to call with interrupts already disabled).
- Memory allocation during vq setup (virtqueue_reinit_vring() reuses existing buffers instead).

The series provides noirq-safe variants for each of these, plus a new config_ops->reset_vqs() callback that lets the transport reprogram queue registers without freeing or reallocating vring memory.

Not all transports can safely perform these operations in the noirq phase. Transports like virtio-ccw issue channel commands and wait for a completion interrupt, which will never arrive while device interrupts are masked at the interrupt controller. A new boolean field, config_ops->noirq_safe, marks transports that implement reset/status operations via simple MMIO reads/writes and are therefore safe to use in noirq context. The noirq helpers assert this flag at runtime, and virtio_device_freeze_noirq() enforces it at freeze time, returning -EOPNOTSUPP early to prevent a deadlock on resume.

When a driver implements restore_noirq, the device bring-up (reset -> ACKNOWLEDGE -> DRIVER -> finalize_features -> FEATURES_OK) happens in the noirq phase. The subsequent normal-phase virtio_device_restore() detects this and skips the redundant re-initialization.

Patch breakdown
===============

Patch 1 is a preparatory refactoring with no functional change. Patches 2-3 add the core infrastructure. Patch 4 wires it up for virtio-mmio.

1. virtio: separate PM restore and reset_done paths

   Splits virtio_device_restore_priv() into independent virtio_device_restore() and virtio_device_reset_done() paths, using a shared virtio_device_reinit() helper. This is a pure refactoring that makes the restore path independently extensible without complicating the boolean dispatch.

2. virtio_ring: export virtqueue_reinit_vring() for noirq restore

   Adds virtqueue_reinit_vring(), an exported wrapper that resets vring indices and descriptor state in place without any memory allocation, making it safe to call from noirq context. Also resets IN_ORDER-specific state (free_head, batch_last.id) in virtqueue_init() to keep the ring consistent after reinit, and adds runtime WARN_ON checks for unexpected in-flight descriptor state and split-ring index consistency.

3. virtio: add noirq system sleep PM infrastructure

   Adds noirq-safe helpers (virtio_add_status_noirq, virtio_features_ok_noirq, virtio_reset_device_noirq, virtio_config_core_enable_noirq, virtio_device_ready_noirq) and the freeze_noirq/restore_noirq driver callbacks, plus the config_ops->reset_vqs() transport hook. Introduces config_ops->noirq_safe to mark transports whose reset/status operations are safe in noirq context (e.g. simple MMIO), and returns -EOPNOTSUPP early from virtio_device_freeze_noirq() when the transport does not meet the noirq requirements. Modifies virtio_device_restore() to skip bring-up when restore_noirq has already run.

4. virtio-mmio: wire up noirq system sleep PM callbacks

   Implements vm_reset_vqs(), which iterates over the existing virtqueues, reinitializes the vring state via virtqueue_reinit_vring(), and reprograms the MMIO queue registers. Adds virtio_mmio_freeze_noirq()/virtio_mmio_restore_noirq() and registers them via SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(). Sets .noirq_safe = true in virtio_mmio_config_ops to declare that MMIO-based status and reset operations are safe during the noirq PM phase.

Testing
=======

Build-tested with arm64 cross-compilation (make ARCH=arm64 M=drivers/virtio). Runtime-tested on an internal virtio-mmio platform with virtio-clock, confirming that the clock device is functional before other devices' normal restore() callbacks run.
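To illustrate the intended driver-side usage, a dependent device driver would hook the new callbacks roughly as below. This is a sketch only: the freeze_noirq/restore_noirq fields are the additions this series proposes for struct virtio_driver, but the driver itself and every vclk_* name are invented for illustration.

```c
/* Hypothetical virtio-clock driver using the callbacks proposed here.
 * All vclk_* identifiers are made up; only freeze_noirq/restore_noirq
 * correspond to the new struct virtio_driver fields from this series.
 */
#include <linux/virtio.h>
#include <linux/virtio_config.h>

static int vclk_freeze_noirq(struct virtio_device *vdev)
{
	/* Quiesce the device using only non-sleeping MMIO accesses.
	 * The core has already verified config_ops->noirq_safe. */
	return 0;
}

static int vclk_restore_noirq(struct virtio_device *vdev)
{
	/* Runs before any normal-phase restore callback. The core has
	 * already redone reset -> ACKNOWLEDGE -> DRIVER -> FEATURES_OK
	 * and reprogrammed the queues via config_ops->reset_vqs(),
	 * reusing the existing vrings (no allocation in noirq context).
	 * Re-enable the clock output here so dependent devices find it
	 * running when their own restore() is invoked. */
	return 0;
}

static struct virtio_driver vclk_driver = {
	.driver.name	= "virtio-clock",
	/* existing callbacks (probe, remove, freeze, restore) elided */
	.freeze_noirq	= vclk_freeze_noirq,	/* new in this series */
	.restore_noirq	= vclk_restore_noirq,	/* new in this series */
};
```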
Changes
=======

v4:
  virtio_ring: export virtqueue_reinit_vring() for noirq restore
  - Tightened reinit safety by resetting IN_ORDER-specific state.
  - Added extra split-ring consistency WARN_ON checks.
  - Clarified caller preconditions for noirq-safe vring reinit.

  virtio: add noirq system sleep PM infrastructure
  - Added config_ops->noirq_safe to explicitly mark transports that are
    safe in the noirq PM phase.
  - Enforced early -EOPNOTSUPP checks in freeze_noirq for unsupported
    transport combinations (noirq_safe/reset_vqs requirements).
  - Added defensive runtime guards/warnings in the noirq helper and
    restore paths.
  - Discussed the freeze-before-freeze_noirq abort scenario raised in
    review and concluded that the fallback restore path is intentionally
    handled by regular restore after core reinit/reset, so reset_vqs
    stays limited to the noirq restore flow.

  virtio-mmio: wire up noirq system sleep PM callbacks
  - Marked the virtio-mmio transport as noirq-capable by setting
    .noirq_safe in virtio_mmio_config_ops.

v3:
  virtio: separate PM restore and reset_done paths
  - Refined the restore flow to explicitly handle the no-driver case
    after reinit, improving clarity and avoiding unnecessary driver-path
    assumptions.

  virtio_ring: export virtqueue_reinit_vring() for noirq restore
  - Hardened virtqueue_reinit_vring() with stronger safety notes and a
    runtime WARN_ON check to catch reinit with unexpected free-list
    state.

  virtio: add noirq system sleep PM infrastructure
  - Added explicit noirq restore completion tracking
    (noirq_restore_done), updated the PM sequencing to use it, and added
    early freeze_noirq validation for missing reset_vqs support.

  virtio-mmio: wire up noirq system sleep PM callbacks
  - Updated the virtio-mmio restore path to skip the legacy
    GUEST_PAGE_SIZE rewrite when noirq restore has already completed.

v2:
  virtio-mmio: wire up noirq system sleep PM callbacks
  - Moved the code duplicated between vm_setup_vq() and vm_reset_vqs()
    into a vm_active_vq() helper.
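For reference, the transport-side marking that patch 4 describes amounts to wiring like the following. This is a sketch only: .reset_vqs and .noirq_safe are the struct virtio_config_ops additions proposed by this series, the remaining ops are elided, and the vm_reset_vqs() body is summarized in comments rather than reproduced from the patch.

```c
/* Sketch of the virtio-mmio transport wiring from patch 4.
 * Only the new fields are shown; the function body is a placeholder
 * summarizing the behavior described in the cover letter. */
static int vm_reset_vqs(struct virtio_device *vdev)
{
	/* For each existing virtqueue: reset ring state in place via
	 * virtqueue_reinit_vring() (no memory allocation), then
	 * reprogram the MMIO queue registers for that vq. */
	return 0;
}

static const struct virtio_config_ops virtio_mmio_config_ops = {
	/* .get, .set, .get_status, .set_status, .reset, ... elided */
	.reset_vqs	= vm_reset_vqs,	/* new transport hook */
	.noirq_safe	= true,		/* status/reset are plain MMIO */
};
```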
Sungho Bae (4):
  virtio: separate PM restore and reset_done paths
  virtio_ring: export virtqueue_reinit_vring() for noirq restore
  virtio: add noirq system sleep PM infrastructure
  virtio-mmio: wire up noirq system sleep PM callbacks

 drivers/virtio/virtio.c       | 291 +++++++++++++++++++++++++++++++---
 drivers/virtio/virtio_mmio.c  | 134 +++++++++++-----
 drivers/virtio/virtio_ring.c  |  51 ++++++
 include/linux/virtio.h        |  10 ++
 include/linux/virtio_config.h |  39 +++++
 include/linux/virtio_ring.h   |   3 +
 6 files changed, 462 insertions(+), 66 deletions(-)

--
2.43.0