From: Sungho Bae
To: mst@redhat.com, jasowang@redhat.com
Cc: xuanzhuo@linux.alibaba.com, eperezma@redhat.com,
	virtualization@lists.linux.dev, linux-kernel@vger.kernel.org,
	Sungho Bae
Subject: [RFC PATCH v2 0/4] virtio: add noirq system sleep PM callbacks for virtio-mmio
Date: Fri, 17 Apr 2026 22:34:26 +0900
Message-Id: <20260417133430.507-1-baver.bae@gmail.com>

From: Sungho Bae

Hi all,

Some virtio-mmio based devices, such as virtio-clock or
virtio-regulator, must become operational before other devices have
their regular PM restore callbacks invoked, because those other devices
depend on them. In general, the PM framework provides three phases for
the system sleep sequence (freeze, freeze_late, freeze_noirq) and the
corresponding resume phases. However, the virtio core only supports the
normal freeze/restore phase, so virtio drivers have no way to
participate in the noirq phase, which runs with IRQs disabled and is
guaranteed to complete before any normal-phase restore callbacks run.

This series adds the core infrastructure and the virtio-mmio transport
wiring so that virtio drivers can implement freeze_noirq/restore_noirq
callbacks.

Design overview
===============

The noirq phase runs with device IRQ handlers disabled and must avoid
sleepable operations. The main constraints addressed are:

- might_sleep() in virtio_add_status() and virtio_features_ok().
- virtio_synchronize_cbs() in virtio_reset_device() (IRQs are already
  quiesced).
- spin_lock_irq() in virtio_config_core_enable() (not safe to call with
  interrupts already disabled).
- Memory allocation during vq setup (virtqueue_reinit_vring() reuses
  existing buffers instead).

The series provides noirq-safe variants for each of these, plus a new
config_ops->reset_vqs() callback that lets the transport reprogram
queue registers without freeing and reallocating vring memory.

When a driver implements restore_noirq, the device bring-up (reset ->
ACKNOWLEDGE -> DRIVER -> finalize_features -> FEATURES_OK) happens in
the noirq phase. The subsequent normal-phase virtio_device_restore()
detects this and skips the redundant re-initialization.

Patch breakdown
===============

Patch 1 is a preparatory refactoring with no functional change.
Patches 2-3 add the core infrastructure. Patch 4 wires it up for
virtio-mmio.

1. virtio: separate PM restore and reset_done paths

   Splits virtio_device_restore_priv() into independent
   virtio_device_restore() and virtio_device_reset_done() paths, using
   a shared virtio_device_reinit() helper. This is a pure refactoring
   that makes the restore path independently extensible without
   complicating the boolean dispatch.

2. virtio_ring: export virtqueue_reinit_vring() for noirq restore

   Adds a thin exported wrapper around the existing static
   virtqueue_reset_split()/virtqueue_reset_packed() helpers, which
   reset vring indices and descriptor state in place without any memory
   allocation. This makes them usable from the transport's reset_vqs()
   callback in noirq context.

3. virtio: add noirq system sleep PM infrastructure

   Adds noirq-safe helpers (virtio_add_status_noirq,
   virtio_features_ok_noirq, virtio_reset_device_noirq,
   virtio_config_core_enable_noirq, virtio_device_ready_noirq), the
   freeze_noirq/restore_noirq driver callbacks, and the
   config_ops->reset_vqs() transport hook. Modifies
   virtio_device_restore() to skip the bring-up when restore_noirq has
   already run.

4. virtio-mmio: wire up noirq system sleep PM callbacks

   Implements vm_reset_vqs(), which iterates over the existing
   virtqueues, reinitializes the vring state via
   virtqueue_reinit_vring(), and reprograms the MMIO queue registers.
   Adds virtio_mmio_freeze_noirq()/virtio_mmio_restore_noirq() and
   registers them via SET_NOIRQ_SYSTEM_SLEEP_PM_OPS().

Testing
=======

Build-tested with an arm64 cross-compilation
(make ARCH=arm64 M=drivers/virtio). Runtime-tested on an internal
virtio-mmio platform with virtio-clock, confirming that the clock
device is operational before other devices' normal restore() callbacks
run.

Questions for reviewers
=======================

- virtio_config_core_enable_noirq() is currently a separate helper that
  duplicates the spin_lock logic; it differs from
  virtio_config_core_enable() only in its locking context. Would it be
  better to merge them into a single function with irqsave locking, or
  is the current separate-helper approach acceptable?

- Should reset_vqs be a mandatory callback when restore_noirq is used,
  or should the core silently skip vq re-initialization when the vq
  list is empty (the current behavior)?

- Some functions, such as get_status() and set_status(), are also used
  in the noirq restore path, so the current implementation assumes that
  drivers using the noirq callbacks implement these functions in a way
  that is safe in noirq context. Is that assumption acceptable, or
  should there be a flag indicating noirq-safe support, or separate
  noirq variants of these callbacks?

Changes
=======

v2: virtio-mmio: wire up noirq system sleep PM callbacks
- The code that was duplicated in vm_setup_vq() and vm_reset_vqs() has
  been moved into a new vm_active_vq() helper.
Sungho Bae (4):
  virtio: separate PM restore and reset_done paths
  virtio_ring: export virtqueue_reinit_vring() for noirq restore
  virtio: add noirq system sleep PM infrastructure
  virtio-mmio: wire up noirq system sleep PM callbacks

 drivers/virtio/virtio.c       | 234 ++++++++++++++++++++++++++++++----
 drivers/virtio/virtio_mmio.c  | 131 +++++++++++++------
 drivers/virtio/virtio_ring.c  |  19 +++
 include/linux/virtio.h        |   8 ++
 include/linux/virtio_config.h |  29 +++++
 include/linux/virtio_ring.h   |   3 +
 6 files changed, 359 insertions(+), 65 deletions(-)

-- 
2.43.0