* [PATCH v2 0/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic Fast Inject
@ 2026-03-26 1:42 Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest Douglas Freimuth
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Douglas Freimuth @ 2026-03-26 1:42 UTC (permalink / raw)
To: borntraeger, imbrenda, frankja, david, hca, gor, agordeev, svens,
kvm, linux-s390, linux-kernel
Cc: mjrosato, freimuth
S390 needs this series of three patches to enable a non-blocking path for
irqfd injection via kvm_arch_set_irq_inatomic(). Before these changes,
kvm_arch_set_irq_inatomic() simply returned -EWOULDBLOCK and placed every
interrupt on the global work queue, to be processed later by a different
thread. This series implements an s390 version of the inatomic path; it is
relevant to virtio-blk and virtio-net and was tested against virtio-pci
and virtio-ccw.
The inatomic fast path cannot give up control since it runs with
interrupts disabled. This required the following changes relative to
today's slow path. First, the adapter_indicators page must be mapped ahead
of time, since it is accessed with interrupts disabled, so map/unmap
functions were added. Second, access to resources shared between the fast
and slow paths was converted from mutexes and semaphores to spinlocks.
Finally, while the slow path allocates with GFP_KERNEL_ACCOUNT, the fast
path had to allocate with GFP_ATOMIC. Each of these changes was required
to prevent blocking on the fast inject path.
Statistical counters have been added to enable analysis of irq injection
on the fast and slow paths: io_390_inatomic, io_flic_inject_airq,
io_set_adapter_int and io_390_inatomic_adapter_masked. Counters have also
been added to analyze map/unmap of the adapter indicator pages in
non-Secure Execution environments and to confirm the fencing of Fast
Inject in Secure Execution environments. To take advantage of this kernel
series with virtio-pci, a QEMU that includes the
's390x/pci: set kvm_msi_via_irqfd_allowed' fix is needed. Additionally,
the guest XML needs a thread pool with threads explicitly assigned to each
disk device, using the common way of defining threads for disks.
Patch 1 enables map/unmap of adapter indicator pages; in Secure Execution
environments it avoids the long-term mapping.
v1->v2: Added kvm_s390_unmap_all_adapters_pv() for case KVM_PV_ENABLE
v1->v2: Changed semaphore to spin_lock in Patch 1 and Patch 2
v1->v2: Fixed a build break in Patch 2
v1->v2: Added KVM & S390 tags to sub files per Sean
v1->v2: Rewrote commit messages to more clearly describe each patch
Douglas Freimuth (3):
Add map/unmap ioctl and clean mappings post-guest
Enable adapter_indicators_set to use mapped pages
Introducing kvm_arch_set_irq_inatomic fast inject
arch/s390/include/asm/kvm_host.h | 11 +-
arch/s390/kvm/interrupt.c | 366 +++++++++++++++++++++++++------
arch/s390/kvm/kvm-s390.c | 44 +++-
arch/s390/kvm/kvm-s390.h | 3 +-
4 files changed, 356 insertions(+), 68 deletions(-)
--
2.52.0
^ permalink raw reply [flat|nested] 6+ messages in thread
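The contract sketched in the cover letter can be modeled in a few lines: the
fast path either completes without blocking or returns -EWOULDBLOCK so the
irqfd core falls back to the work queue. The following is an illustrative
userspace sketch of that decision, not the kernel code; all names
(adapter_model, set_irq_inatomic_model, inject_model) are hypothetical.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Userspace model of the irqfd injection flow: the inatomic fast path may
 * only succeed when it can run without blocking; otherwise it returns
 * -EWOULDBLOCK and the caller defers to the workqueue (slow path). */
struct adapter_model {
	bool mapped;	/* indicator pages pre-mapped via the new ioctl */
	bool masked;	/* adapter currently masked */
};

static int set_irq_inatomic_model(const struct adapter_model *a)
{
	if (!a->mapped)
		return -EWOULDBLOCK;	/* would need gup/sleeping locks: defer */
	if (a->masked)
		return 0;		/* indicator set, no airq injected */
	return 1;			/* injected on the fast path */
}

static int inject_model(const struct adapter_model *a)
{
	int ret = set_irq_inatomic_model(a);

	if (ret == -EWOULDBLOCK)
		return 2;		/* queued to the global work queue */
	return ret;
}
```

In a Secure Execution guest the pages are never mapped, so the model always
takes the deferred branch, matching the fencing described above.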
* [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest
2026-03-26 1:42 [PATCH v2 0/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic Fast Inject Douglas Freimuth
@ 2026-03-26 1:42 ` Douglas Freimuth
2026-03-26 11:46 ` Christian Borntraeger
2026-03-26 15:56 ` Janosch Frank
2026-03-26 1:42 ` [PATCH v2 2/3] KVM: s390: Enable adapter_indicators_set to use mapped pages Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 3/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic fast inject Douglas Freimuth
2 siblings, 2 replies; 6+ messages in thread
From: Douglas Freimuth @ 2026-03-26 1:42 UTC (permalink / raw)
To: borntraeger, imbrenda, frankja, david, hca, gor, agordeev, svens,
kvm, linux-s390, linux-kernel
Cc: mjrosato, freimuth
S390 needs map/unmap ioctls that map the adapter set indicator pages so
the pages can be accessed while interrupts are disabled. The mappings are
cleaned up when the guest is removed.

The map/unmap ioctls are fenced in Secure Execution environments to avoid
long-term pinning; there the path of execution available before this patch
is followed.

Statistical counters are added to count map/unmap operations on adapter
indicator pages. They can be used to analyze map/unmap in non-Secure
Execution environments, and in Secure Execution environments they confirm
the fencing, as the counters are not incremented when the adapter
indicator pages are not mapped.
Signed-off-by: Douglas Freimuth <freimuth@linux.ibm.com>
---
arch/s390/include/asm/kvm_host.h | 5 ++
arch/s390/kvm/interrupt.c | 145 +++++++++++++++++++++++++------
arch/s390/kvm/kvm-s390.c | 20 +++++
3 files changed, 144 insertions(+), 26 deletions(-)
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 3039c88daa63..a078420751a1 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -448,6 +448,8 @@ struct kvm_vcpu_arch {
struct kvm_vm_stat {
struct kvm_vm_stat_generic generic;
u64 inject_io;
+ u64 io_390_adapter_map;
+ u64 io_390_adapter_unmap;
u64 inject_float_mchk;
u64 inject_pfault_done;
u64 inject_service_signal;
@@ -479,6 +481,9 @@ struct s390_io_adapter {
bool masked;
bool swap;
bool suppressible;
+ spinlock_t maps_lock;
+ struct list_head maps;
+ unsigned int nr_maps;
};
#define MAX_S390_IO_ADAPTERS ((MAX_ISC + 1) * 8)
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 7cb8ce833b62..fce170693ff3 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2426,6 +2426,9 @@ static int register_io_adapter(struct kvm_device *dev,
if (!adapter)
return -ENOMEM;
+ INIT_LIST_HEAD(&adapter->maps);
+ spin_lock_init(&adapter->maps_lock);
+ adapter->nr_maps = 0;
adapter->id = adapter_info.id;
adapter->isc = adapter_info.isc;
adapter->maskable = adapter_info.maskable;
@@ -2450,12 +2453,104 @@ int kvm_s390_mask_adapter(struct kvm *kvm, unsigned int id, bool masked)
return ret;
}
+static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
+{
+ struct mm_struct *mm = kvm->mm;
+ struct page *page = NULL;
+ int locked = 1;
+
+ if (mmget_not_zero(mm)) {
+ mmap_read_lock(mm);
+ get_user_pages_remote(mm, uaddr, 1, FOLL_WRITE,
+ &page, &locked);
+ if (locked)
+ mmap_read_unlock(mm);
+ mmput(mm);
+ }
+
+ return page;
+}
+
+static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
+{
+ struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
+ struct s390_map_info *map;
+ unsigned long flags;
+ int ret;
+
+ if (!adapter || !addr)
+ return -EINVAL;
+
+ map = kzalloc_obj(*map, GFP_KERNEL);
+ if (!map)
+ return -ENOMEM;
+
+ map->page = get_map_page(kvm, addr);
+ if (!map->page) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ INIT_LIST_HEAD(&map->list);
+ map->guest_addr = addr;
+ map->addr = addr;
+ spin_lock_irqsave(&adapter->maps_lock, flags);
+ if (adapter->nr_maps++ < MAX_S390_ADAPTER_MAPS) {
+ list_add_tail(&map->list, &adapter->maps);
+ ret = 0;
+ } else {
+ put_page(map->page);
+ ret = -EINVAL;
+ }
+ spin_unlock_irqrestore(&adapter->maps_lock, flags);
+out:
+ if (ret)
+ kfree(map);
+ return ret;
+}
+
+static int kvm_s390_adapter_unmap(struct kvm *kvm, unsigned int id, __u64 addr)
+{
+ struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
+ struct s390_map_info *map, *tmp;
+ unsigned long flags;
+ int found = 0;
+
+ if (!adapter || !addr)
+ return -EINVAL;
+
+ spin_lock_irqsave(&adapter->maps_lock, flags);
+ list_for_each_entry_safe(map, tmp, &adapter->maps, list) {
+ if (map->guest_addr == addr) {
+ found = 1;
+ adapter->nr_maps--;
+ list_del(&map->list);
+ put_page(map->page);
+ kfree(map);
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&adapter->maps_lock, flags);
+
+ return found ? 0 : -ENOENT;
+}
+
void kvm_s390_destroy_adapters(struct kvm *kvm)
{
int i;
+ struct s390_map_info *map, *tmp;
- for (i = 0; i < MAX_S390_IO_ADAPTERS; i++)
+ for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
+ if (!kvm->arch.adapters[i])
+ continue;
+ list_for_each_entry_safe(map, tmp,
+ &kvm->arch.adapters[i]->maps, list) {
+ list_del(&map->list);
+ put_page(map->page);
+ kfree(map);
+ }
kfree(kvm->arch.adapters[i]);
+ }
}
static int modify_io_adapter(struct kvm_device *dev,
@@ -2463,7 +2558,8 @@ static int modify_io_adapter(struct kvm_device *dev,
{
struct kvm_s390_io_adapter_req req;
struct s390_io_adapter *adapter;
- int ret;
+ __u64 host_addr;
+ int ret, idx;
if (copy_from_user(&req, (void __user *)attr->addr, sizeof(req)))
return -EFAULT;
@@ -2477,14 +2573,29 @@ static int modify_io_adapter(struct kvm_device *dev,
if (ret > 0)
ret = 0;
break;
- /*
- * The following operations are no longer needed and therefore no-ops.
- * The gpa to hva translation is done when an IRQ route is set up. The
- * set_irq code uses get_user_pages_remote() to do the actual write.
- */
case KVM_S390_IO_ADAPTER_MAP:
case KVM_S390_IO_ADAPTER_UNMAP:
- ret = 0;
+ /* If in Secure Execution mode do not long term pin. */
+ mutex_lock(&dev->kvm->lock);
+ if (kvm_s390_pv_is_protected(dev->kvm)) {
+ mutex_unlock(&dev->kvm->lock);
+ return 0;
+ }
+ mutex_unlock(&dev->kvm->lock);
+ idx = srcu_read_lock(&dev->kvm->srcu);
+ host_addr = gpa_to_hva(dev->kvm, req.addr);
+ if (kvm_is_error_hva(host_addr)) {
+ srcu_read_unlock(&dev->kvm->srcu, idx);
+ return -EFAULT;
+ }
+ srcu_read_unlock(&dev->kvm->srcu, idx);
+ if (req.type == KVM_S390_IO_ADAPTER_MAP) {
+ dev->kvm->stat.io_390_adapter_map++;
+ ret = kvm_s390_adapter_map(dev->kvm, req.id, host_addr);
+ } else {
+ dev->kvm->stat.io_390_adapter_unmap++;
+ ret = kvm_s390_adapter_unmap(dev->kvm, req.id, host_addr);
+ }
break;
default:
ret = -EINVAL;
@@ -2730,24 +2841,6 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
}
-static struct page *get_map_page(struct kvm *kvm, u64 uaddr)
-{
- struct mm_struct *mm = kvm->mm;
- struct page *page = NULL;
- int locked = 1;
-
- if (mmget_not_zero(mm)) {
- mmap_read_lock(mm);
- get_user_pages_remote(mm, uaddr, 1, FOLL_WRITE,
- &page, &locked);
- if (locked)
- mmap_read_unlock(mm);
- mmput(mm);
- }
-
- return page;
-}
-
static int adapter_indicators_set(struct kvm *kvm,
struct s390_io_adapter *adapter,
struct kvm_s390_adapter_int *adapter_int)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index b2c01fa7b852..4b1820bcead1 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -68,6 +68,8 @@
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS(),
STATS_DESC_COUNTER(VM, inject_io),
+ STATS_DESC_COUNTER(VM, io_390_adapter_map),
+ STATS_DESC_COUNTER(VM, io_390_adapter_unmap),
STATS_DESC_COUNTER(VM, inject_float_mchk),
STATS_DESC_COUNTER(VM, inject_pfault_done),
STATS_DESC_COUNTER(VM, inject_service_signal),
@@ -2491,6 +2493,23 @@ static int kvm_s390_pv_dmp(struct kvm *kvm, struct kvm_pv_cmd *cmd,
return r;
}
+static void kvm_s390_unmap_all_adapters_pv(struct kvm *kvm)
+{
+ int i;
+ struct s390_map_info *map, *tmp;
+
+ for (i = 0; i < MAX_S390_IO_ADAPTERS; i++) {
+ if (!kvm->arch.adapters[i])
+ continue;
+ list_for_each_entry_safe(map, tmp,
+ &kvm->arch.adapters[i]->maps, list) {
+ list_del(&map->list);
+ put_page(map->page);
+ kfree(map);
+ }
+ }
+}
+
static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
{
const bool need_lock = (cmd->cmd != KVM_PV_ASYNC_CLEANUP_PERFORM);
@@ -2503,6 +2522,7 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
switch (cmd->cmd) {
case KVM_PV_ENABLE: {
+ kvm_s390_unmap_all_adapters_pv(kvm);
r = -EINVAL;
if (kvm_s390_pv_is_protected(kvm))
break;
--
2.52.0
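The per-adapter map list added by this patch is bounded and searched by
address. A simplified userspace model of that bookkeeping (no locking, plain
malloc, hypothetical names; MAX_MAPS stands in for MAX_S390_ADAPTER_MAPS)
looks like this:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

#define MAX_MAPS 2	/* stand-in for MAX_S390_ADAPTER_MAPS */

struct map_info {
	unsigned long addr;
	struct map_info *next;
};

struct adapter {
	struct map_info *maps;
	unsigned int nr_maps;
};

/* add a mapping unless the bounded list is already full */
static int adapter_map(struct adapter *a, unsigned long addr)
{
	struct map_info *m;

	if (!addr || a->nr_maps >= MAX_MAPS)
		return -EINVAL;
	m = calloc(1, sizeof(*m));
	if (!m)
		return -ENOMEM;
	m->addr = addr;
	m->next = a->maps;
	a->maps = m;
	a->nr_maps++;
	return 0;
}

/* remove the mapping for addr, if present */
static int adapter_unmap(struct adapter *a, unsigned long addr)
{
	struct map_info **p, *m;

	for (p = &a->maps; (m = *p) != NULL; p = &m->next) {
		if (m->addr == addr) {
			*p = m->next;
			free(m);
			a->nr_maps--;
			return 0;
		}
	}
	return -ENOENT;
}
```

The kernel version additionally pins the page with get_user_pages_remote()
on map and put_page()s it on unmap, under maps_lock.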
* [PATCH v2 2/3] KVM: s390: Enable adapter_indicators_set to use mapped pages
2026-03-26 1:42 [PATCH v2 0/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic Fast Inject Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest Douglas Freimuth
@ 2026-03-26 1:42 ` Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 3/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic fast inject Douglas Freimuth
2 siblings, 0 replies; 6+ messages in thread
From: Douglas Freimuth @ 2026-03-26 1:42 UTC (permalink / raw)
To: borntraeger, imbrenda, frankja, david, hca, gor, agordeev, svens,
kvm, linux-s390, linux-kernel
Cc: mjrosato, freimuth
The s390 adapter_indicators_set function needs to be able to use mapped
pages so that work can be processed on a fast path while interrupts are
disabled. If the adapter indicator pages are not mapped, local mapping is
done on the slow path, as before this patch. Secure Execution
environments, for example, always take the local mapping path, as they did
before this patch.
Signed-off-by: Douglas Freimuth <freimuth@linux.ibm.com>
---
arch/s390/kvm/interrupt.c | 87 ++++++++++++++++++++++++++++-----------
1 file changed, 62 insertions(+), 25 deletions(-)
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index fce170693ff3..48d05a230416 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -2841,41 +2841,75 @@ static unsigned long get_ind_bit(__u64 addr, unsigned long bit_nr, bool swap)
return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
}
+static struct s390_map_info *get_map_info(struct s390_io_adapter *adapter,
+ u64 addr)
+{
+ struct s390_map_info *map;
+
+ if (!adapter)
+ return NULL;
+
+ list_for_each_entry(map, &adapter->maps, list) {
+ if (map->guest_addr == addr)
+ return map;
+ }
+ return NULL;
+}
+
static int adapter_indicators_set(struct kvm *kvm,
struct s390_io_adapter *adapter,
struct kvm_s390_adapter_int *adapter_int)
{
unsigned long bit;
int summary_set, idx;
- struct page *ind_page, *summary_page;
+ struct s390_map_info *ind_info, *summary_info;
void *map;
+ struct page *ind_page, *summary_page;
- ind_page = get_map_page(kvm, adapter_int->ind_addr);
- if (!ind_page)
- return -1;
- summary_page = get_map_page(kvm, adapter_int->summary_addr);
- if (!summary_page) {
- put_page(ind_page);
- return -1;
+ ind_info = get_map_info(adapter, adapter_int->ind_addr);
+ if (!ind_info) {
+ ind_page = get_map_page(kvm, adapter_int->ind_addr);
+ if (!ind_page)
+ return -1;
+ idx = srcu_read_lock(&kvm->srcu);
+ map = page_address(ind_page);
+ bit = get_ind_bit(adapter_int->ind_addr,
+ adapter_int->ind_offset, adapter->swap);
+ set_bit(bit, map);
+ mark_page_dirty(kvm, adapter_int->ind_gaddr >> PAGE_SHIFT);
+ set_page_dirty_lock(ind_page);
+ srcu_read_unlock(&kvm->srcu, idx);
+ } else {
+ map = page_address(ind_info->page);
+ bit = get_ind_bit(ind_info->addr, adapter_int->ind_offset, adapter->swap);
+ set_bit(bit, map);
+ }
+ summary_info = get_map_info(adapter, adapter_int->summary_addr);
+ if (!summary_info) {
+ summary_page = get_map_page(kvm, adapter_int->summary_addr);
+ if (!summary_page) {
+ put_page(ind_page);
+ return -1;
+ }
+ idx = srcu_read_lock(&kvm->srcu);
+ map = page_address(summary_page);
+ bit = get_ind_bit(adapter_int->summary_addr,
+ adapter_int->summary_offset, adapter->swap);
+ summary_set = test_and_set_bit(bit, map);
+ mark_page_dirty(kvm, adapter_int->summary_gaddr >> PAGE_SHIFT);
+ set_page_dirty_lock(summary_page);
+ srcu_read_unlock(&kvm->srcu, idx);
+ } else {
+ map = page_address(summary_info->page);
+ bit = get_ind_bit(summary_info->addr, adapter_int->summary_offset,
+ adapter->swap);
+ summary_set = test_and_set_bit(bit, map);
}
- idx = srcu_read_lock(&kvm->srcu);
- map = page_address(ind_page);
- bit = get_ind_bit(adapter_int->ind_addr,
- adapter_int->ind_offset, adapter->swap);
- set_bit(bit, map);
- mark_page_dirty(kvm, adapter_int->ind_gaddr >> PAGE_SHIFT);
- set_page_dirty_lock(ind_page);
- map = page_address(summary_page);
- bit = get_ind_bit(adapter_int->summary_addr,
- adapter_int->summary_offset, adapter->swap);
- summary_set = test_and_set_bit(bit, map);
- mark_page_dirty(kvm, adapter_int->summary_gaddr >> PAGE_SHIFT);
- set_page_dirty_lock(summary_page);
- srcu_read_unlock(&kvm->srcu, idx);
-
- put_page(ind_page);
- put_page(summary_page);
+ if (!ind_info)
+ put_page(ind_page);
+ if (!summary_info)
+ put_page(summary_page);
return summary_set ? 0 : 1;
}
@@ -2890,6 +2924,7 @@ static int set_adapter_int(struct kvm_kernel_irq_routing_entry *e,
{
int ret;
struct s390_io_adapter *adapter;
+ unsigned long flags;
/* We're only interested in the 0->1 transition. */
if (!level)
@@ -2897,7 +2932,9 @@ static int set_adapter_int(struct kvm_kernel_irq_routing_entry *e,
adapter = get_io_adapter(kvm, e->adapter.adapter_id);
if (!adapter)
return -1;
+ spin_lock_irqsave(&adapter->maps_lock, flags);
ret = adapter_indicators_set(kvm, adapter, &e->adapter);
+ spin_unlock_irqrestore(&adapter->maps_lock, flags);
if ((ret > 0) && !adapter->masked) {
ret = kvm_s390_inject_airq(kvm, adapter);
if (ret == 0)
--
2.52.0
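Two details of the indicator logic above are easy to model in userspace: the
big-endian bit flip on the return of get_ind_bit() (`bit ^ (BITS_PER_LONG - 1)`
when swap is set), and the coalescing via test_and_set_bit on the summary
indicator, where adapter_indicators_set() returns 0 ("coalesced") when the
summary bit was already set. The sketch below uses non-atomic stand-ins with
hypothetical names.

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* model of the final swap step in get_ind_bit() */
static unsigned long ind_bit(unsigned long bit, int swap)
{
	return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
}

/* non-atomic stand-in for test_and_set_bit(): returns the old value */
static int test_and_set(unsigned long nr, unsigned long *map)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	unsigned long *w = map + nr / BITS_PER_LONG;
	int old = !!(*w & mask);

	*w |= mask;
	return old;
}
```

A second delivery to the same summary bit sees the old value 1, which is why
the caller treats that case as "coalesced, summary indicator already active".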
* [PATCH v2 3/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic fast inject
2026-03-26 1:42 [PATCH v2 0/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic Fast Inject Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 2/3] KVM: s390: Enable adapter_indicators_set to use mapped pages Douglas Freimuth
@ 2026-03-26 1:42 ` Douglas Freimuth
2 siblings, 0 replies; 6+ messages in thread
From: Douglas Freimuth @ 2026-03-26 1:42 UTC (permalink / raw)
To: borntraeger, imbrenda, frankja, david, hca, gor, agordeev, svens,
kvm, linux-s390, linux-kernel
Cc: mjrosato, freimuth
S390 needs a fast path for irq injection, so this patch introduces an s390
implementation of kvm_arch_set_irq_inatomic(). Instead of placing all
interrupts on the global work queue as is done today, interrupts are
injected directly on the fast path.

The inatomic fast path cannot give up control since it runs with
interrupts disabled. This required the following changes relative to
today's slow path. First, the adapter_indicators page must be mapped ahead
of time, since it is accessed with interrupts disabled, so map/unmap
functions were added. Second, access to resources shared between the fast
and slow paths was converted from mutexes and semaphores to spinlocks.
Finally, while the slow path allocates with GFP_KERNEL_ACCOUNT, the fast
path had to allocate with GFP_ATOMIC. Each of these changes was required
to prevent blocking on the fast inject path.

Fast Inject is fenced in Secure Execution environments by not mapping the
adapter indicator pages; there the path of execution available before this
patch is followed.

Statistical counters have been added to enable analysis of irq injection
on the fast and slow paths: io_390_inatomic, io_flic_inject_airq,
io_set_adapter_int and io_390_inatomic_adapter_masked.
Signed-off-by: Douglas Freimuth <freimuth@linux.ibm.com>
---
arch/s390/include/asm/kvm_host.h | 6 +-
arch/s390/kvm/interrupt.c | 154 +++++++++++++++++++++++++++----
arch/s390/kvm/kvm-s390.c | 24 ++++-
arch/s390/kvm/kvm-s390.h | 3 +-
4 files changed, 160 insertions(+), 27 deletions(-)
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index a078420751a1..90b1a19074ce 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -359,7 +359,7 @@ struct kvm_s390_float_interrupt {
struct kvm_s390_mchk_info mchk;
struct kvm_s390_ext_info srv_signal;
int last_sleep_cpu;
- struct mutex ais_lock;
+ spinlock_t ais_lock;
u8 simm;
u8 nimm;
};
@@ -450,6 +450,10 @@ struct kvm_vm_stat {
u64 inject_io;
u64 io_390_adapter_map;
u64 io_390_adapter_unmap;
+ u64 io_390_inatomic;
+ u64 io_flic_inject_airq;
+ u64 io_set_adapter_int;
+ u64 io_390_inatomic_adapter_masked;
u64 inject_float_mchk;
u64 inject_pfault_done;
u64 inject_service_signal;
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 48d05a230416..d980b432b173 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -1963,15 +1963,10 @@ static int __inject_vm(struct kvm *kvm, struct kvm_s390_interrupt_info *inti)
}
int kvm_s390_inject_vm(struct kvm *kvm,
- struct kvm_s390_interrupt *s390int)
+ struct kvm_s390_interrupt *s390int, struct kvm_s390_interrupt_info *inti)
{
- struct kvm_s390_interrupt_info *inti;
int rc;
- inti = kzalloc_obj(*inti, GFP_KERNEL_ACCOUNT);
- if (!inti)
- return -ENOMEM;
-
inti->type = s390int->type;
switch (inti->type) {
case KVM_S390_INT_VIRTIO:
@@ -2284,6 +2279,7 @@ static int flic_ais_mode_get_all(struct kvm *kvm, struct kvm_device_attr *attr)
{
struct kvm_s390_float_interrupt *fi = &kvm->arch.float_int;
struct kvm_s390_ais_all ais;
+ unsigned long flags;
if (attr->attr < sizeof(ais))
return -EINVAL;
@@ -2291,10 +2287,10 @@ static int flic_ais_mode_get_all(struct kvm *kvm, struct kvm_device_attr *attr)
if (!test_kvm_facility(kvm, 72))
return -EOPNOTSUPP;
- mutex_lock(&fi->ais_lock);
+ spin_lock_irqsave(&fi->ais_lock, flags);
ais.simm = fi->simm;
ais.nimm = fi->nimm;
- mutex_unlock(&fi->ais_lock);
+ spin_unlock_irqrestore(&fi->ais_lock, flags);
if (copy_to_user((void __user *)attr->addr, &ais, sizeof(ais)))
return -EFAULT;
@@ -2632,6 +2628,7 @@ static int modify_ais_mode(struct kvm *kvm, struct kvm_device_attr *attr)
struct kvm_s390_float_interrupt *fi = &kvm->arch.float_int;
struct kvm_s390_ais_req req;
int ret = 0;
+ unsigned long flags;
if (!test_kvm_facility(kvm, 72))
return -EOPNOTSUPP;
@@ -2648,7 +2645,7 @@ static int modify_ais_mode(struct kvm *kvm, struct kvm_device_attr *attr)
2 : KVM_S390_AIS_MODE_SINGLE :
KVM_S390_AIS_MODE_ALL, req.mode);
- mutex_lock(&fi->ais_lock);
+ spin_lock_irqsave(&fi->ais_lock, flags);
switch (req.mode) {
case KVM_S390_AIS_MODE_ALL:
fi->simm &= ~AIS_MODE_MASK(req.isc);
@@ -2661,7 +2658,7 @@ static int modify_ais_mode(struct kvm *kvm, struct kvm_device_attr *attr)
default:
ret = -EINVAL;
}
- mutex_unlock(&fi->ais_lock);
+ spin_unlock_irqrestore(&fi->ais_lock, flags);
return ret;
}
@@ -2675,25 +2672,33 @@ static int kvm_s390_inject_airq(struct kvm *kvm,
.parm = 0,
.parm64 = isc_to_int_word(adapter->isc),
};
+ struct kvm_s390_interrupt_info *inti;
+ unsigned long flags;
+
int ret = 0;
+ inti = kzalloc_obj(*inti, GFP_KERNEL_ACCOUNT);
+ if (!inti)
+ return -ENOMEM;
+
if (!test_kvm_facility(kvm, 72) || !adapter->suppressible)
- return kvm_s390_inject_vm(kvm, &s390int);
+ return kvm_s390_inject_vm(kvm, &s390int, inti);
- mutex_lock(&fi->ais_lock);
+ spin_lock_irqsave(&fi->ais_lock, flags);
if (fi->nimm & AIS_MODE_MASK(adapter->isc)) {
trace_kvm_s390_airq_suppressed(adapter->id, adapter->isc);
+ kfree(inti);
goto out;
}
- ret = kvm_s390_inject_vm(kvm, &s390int);
+ ret = kvm_s390_inject_vm(kvm, &s390int, inti);
if (!ret && (fi->simm & AIS_MODE_MASK(adapter->isc))) {
fi->nimm |= AIS_MODE_MASK(adapter->isc);
trace_kvm_s390_modify_ais_mode(adapter->isc,
KVM_S390_AIS_MODE_SINGLE, 2);
}
out:
- mutex_unlock(&fi->ais_lock);
+ spin_unlock_irqrestore(&fi->ais_lock, flags);
return ret;
}
@@ -2702,6 +2707,8 @@ static int flic_inject_airq(struct kvm *kvm, struct kvm_device_attr *attr)
unsigned int id = attr->attr;
struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
+ kvm->stat.io_flic_inject_airq++;
+
if (!adapter)
return -EINVAL;
@@ -2712,6 +2719,7 @@ static int flic_ais_mode_set_all(struct kvm *kvm, struct kvm_device_attr *attr)
{
struct kvm_s390_float_interrupt *fi = &kvm->arch.float_int;
struct kvm_s390_ais_all ais;
+ unsigned long flags;
if (!test_kvm_facility(kvm, 72))
return -EOPNOTSUPP;
@@ -2719,10 +2727,10 @@ static int flic_ais_mode_set_all(struct kvm *kvm, struct kvm_device_attr *attr)
if (copy_from_user(&ais, (void __user *)attr->addr, sizeof(ais)))
return -EFAULT;
- mutex_lock(&fi->ais_lock);
+ spin_lock_irqsave(&fi->ais_lock, flags);
fi->simm = ais.simm;
fi->nimm = ais.nimm;
- mutex_unlock(&fi->ais_lock);
+ spin_unlock_irqrestore(&fi->ais_lock, flags);
return 0;
}
@@ -2865,9 +2873,12 @@ static int adapter_indicators_set(struct kvm *kvm,
struct s390_map_info *ind_info, *summary_info;
void *map;
struct page *ind_page, *summary_page;
+ unsigned long flags;
+ spin_lock_irqsave(&adapter->maps_lock, flags);
ind_info = get_map_info(adapter, adapter_int->ind_addr);
if (!ind_info) {
+ spin_unlock_irqrestore(&adapter->maps_lock, flags);
ind_page = get_map_page(kvm, adapter_int->ind_addr);
if (!ind_page)
return -1;
@@ -2883,9 +2894,13 @@ static int adapter_indicators_set(struct kvm *kvm,
map = page_address(ind_info->page);
bit = get_ind_bit(ind_info->addr, adapter_int->ind_offset, adapter->swap);
set_bit(bit, map);
+ spin_unlock_irqrestore(&adapter->maps_lock, flags);
}
+
+ spin_lock_irqsave(&adapter->maps_lock, flags);
summary_info = get_map_info(adapter, adapter_int->summary_addr);
if (!summary_info) {
+ spin_unlock_irqrestore(&adapter->maps_lock, flags);
summary_page = get_map_page(kvm, adapter_int->summary_addr);
if (!summary_page) {
put_page(ind_page);
@@ -2904,6 +2919,7 @@ static int adapter_indicators_set(struct kvm *kvm,
bit = get_ind_bit(summary_info->addr, adapter_int->summary_offset,
adapter->swap);
summary_set = test_and_set_bit(bit, map);
+ spin_unlock_irqrestore(&adapter->maps_lock, flags);
}
if (!ind_info)
@@ -2913,6 +2929,37 @@ static int adapter_indicators_set(struct kvm *kvm,
return summary_set ? 0 : 1;
}
+static int adapter_indicators_set_fast(struct kvm *kvm,
+ struct s390_io_adapter *adapter,
+ struct kvm_s390_adapter_int *adapter_int)
+{
+ unsigned long bit;
+ int summary_set;
+ struct s390_map_info *ind_info, *summary_info;
+ void *map;
+
+ spin_lock(&adapter->maps_lock);
+ ind_info = get_map_info(adapter, adapter_int->ind_addr);
+ if (!ind_info) {
+ spin_unlock(&adapter->maps_lock);
+ return -EWOULDBLOCK;
+ }
+ map = page_address(ind_info->page);
+ bit = get_ind_bit(ind_info->addr, adapter_int->ind_offset, adapter->swap);
+ set_bit(bit, map);
+ summary_info = get_map_info(adapter, adapter_int->summary_addr);
+ if (!summary_info) {
+ spin_unlock(&adapter->maps_lock);
+ return -EWOULDBLOCK;
+ }
+ map = page_address(summary_info->page);
+ bit = get_ind_bit(summary_info->addr, adapter_int->summary_offset,
+ adapter->swap);
+ summary_set = test_and_set_bit(bit, map);
+ spin_unlock(&adapter->maps_lock);
+ return summary_set ? 0 : 1;
+}
+
/*
* < 0 - not injected due to error
* = 0 - coalesced, summary indicator already active
@@ -2924,7 +2971,8 @@ static int set_adapter_int(struct kvm_kernel_irq_routing_entry *e,
{
int ret;
struct s390_io_adapter *adapter;
- unsigned long flags;
+
+ kvm->stat.io_set_adapter_int++;
/* We're only interested in the 0->1 transition. */
if (!level)
@@ -2932,9 +2980,7 @@ static int set_adapter_int(struct kvm_kernel_irq_routing_entry *e,
adapter = get_io_adapter(kvm, e->adapter.adapter_id);
if (!adapter)
return -1;
- spin_lock_irqsave(&adapter->maps_lock, flags);
ret = adapter_indicators_set(kvm, adapter, &e->adapter);
- spin_unlock_irqrestore(&adapter->maps_lock, flags);
if ((ret > 0) && !adapter->masked) {
ret = kvm_s390_inject_airq(kvm, adapter);
if (ret == 0)
@@ -2996,7 +3042,6 @@ int kvm_set_routing_entry(struct kvm *kvm,
int idx;
switch (ue->type) {
- /* we store the userspace addresses instead of the guest addresses */
case KVM_IRQ_ROUTING_S390_ADAPTER:
if (kvm_is_ucontrol(kvm))
return -EINVAL;
@@ -3587,3 +3632,72 @@ int __init kvm_s390_gib_init(u8 nisc)
out:
return rc;
}
+
+/*
+ * kvm_arch_set_irq_inatomic: fast-path for irqfd injection
+ */
+int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
+ struct kvm *kvm, int irq_source_id, int level,
+ bool line_status)
+{
+ int ret;
+ struct s390_io_adapter *adapter;
+ struct kvm_s390_float_interrupt *fi = &kvm->arch.float_int;
+ struct kvm_s390_interrupt_info *inti;
+ struct kvm_s390_interrupt s390int = {
+ .type = KVM_S390_INT_IO(1, 0, 0, 0),
+ .parm = 0,
+ };
+
+ kvm->stat.io_390_inatomic++;
+
+ /* We're only interested in the 0->1 transition. */
+ if (!level)
+ return -EWOULDBLOCK;
+ if (e->type != KVM_IRQ_ROUTING_S390_ADAPTER)
+ return -EWOULDBLOCK;
+
+ adapter = get_io_adapter(kvm, e->adapter.adapter_id);
+ if (!adapter)
+ return -EWOULDBLOCK;
+
+ s390int.parm64 = isc_to_int_word(adapter->isc);
+ ret = adapter_indicators_set_fast(kvm, adapter, &e->adapter);
+ if (ret < 0)
+ return -EWOULDBLOCK;
+ if (!ret || adapter->masked) {
+ kvm->stat.io_390_inatomic_adapter_masked++;
+ return 0;
+ }
+
+ inti = kzalloc_obj(*inti, GFP_ATOMIC);
+ if (!inti)
+ return -EWOULDBLOCK;
+
+ if (!test_kvm_facility(kvm, 72) || !adapter->suppressible) {
+ ret = kvm_s390_inject_vm(kvm, &s390int, inti);
+ if (ret == 0)
+ return ret;
+ else
+ return -EWOULDBLOCK;
+ }
+
+ spin_lock(&fi->ais_lock);
+ if (fi->nimm & AIS_MODE_MASK(adapter->isc)) {
+ trace_kvm_s390_airq_suppressed(adapter->id, adapter->isc);
+ kfree(inti);
+ goto out;
+ }
+
+ ret = kvm_s390_inject_vm(kvm, &s390int, inti);
+ if (!ret && (fi->simm & AIS_MODE_MASK(adapter->isc))) {
+ fi->nimm |= AIS_MODE_MASK(adapter->isc);
+ trace_kvm_s390_modify_ais_mode(adapter->isc,
+ KVM_S390_AIS_MODE_SINGLE, 2);
+ }
+ goto out;
+
+out:
+ spin_unlock(&fi->ais_lock);
+ return 0;
+}
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 4b1820bcead1..c53e25bcfab4 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -70,6 +70,10 @@ const struct kvm_stats_desc kvm_vm_stats_desc[] = {
STATS_DESC_COUNTER(VM, inject_io),
STATS_DESC_COUNTER(VM, io_390_adapter_map),
STATS_DESC_COUNTER(VM, io_390_adapter_unmap),
+ STATS_DESC_COUNTER(VM, io_390_inatomic),
+ STATS_DESC_COUNTER(VM, io_flic_inject_airq),
+ STATS_DESC_COUNTER(VM, io_set_adapter_int),
+ STATS_DESC_COUNTER(VM, io_390_inatomic_adapter_masked),
STATS_DESC_COUNTER(VM, inject_float_mchk),
STATS_DESC_COUNTER(VM, inject_pfault_done),
STATS_DESC_COUNTER(VM, inject_service_signal),
@@ -2862,6 +2866,7 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
void __user *argp = (void __user *)arg;
struct kvm_device_attr attr;
int r;
+ struct kvm_s390_interrupt_info *inti;
switch (ioctl) {
case KVM_S390_INTERRUPT: {
@@ -2870,7 +2875,10 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
r = -EFAULT;
if (copy_from_user(&s390int, argp, sizeof(s390int)))
break;
- r = kvm_s390_inject_vm(kvm, &s390int);
+ inti = kzalloc_obj(*inti, GFP_KERNEL_ACCOUNT);
+ if (!inti)
+ return -ENOMEM;
+ r = kvm_s390_inject_vm(kvm, &s390int, inti);
break;
}
case KVM_CREATE_IRQCHIP: {
@@ -3268,7 +3276,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
mutex_unlock(&kvm->lock);
}
- mutex_init(&kvm->arch.float_int.ais_lock);
+ spin_lock_init(&kvm->arch.float_int.ais_lock);
spin_lock_init(&kvm->arch.float_int.lock);
for (i = 0; i < FIRQ_LIST_COUNT; i++)
INIT_LIST_HEAD(&kvm->arch.float_int.lists[i]);
@@ -4389,11 +4397,16 @@ int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clo
return 1;
}
-static void __kvm_inject_pfault_token(struct kvm_vcpu *vcpu, bool start_token,
- unsigned long token)
+static int __kvm_inject_pfault_token(struct kvm_vcpu *vcpu, bool start_token,
+ unsigned long token)
{
struct kvm_s390_interrupt inti;
struct kvm_s390_irq irq;
+ struct kvm_s390_interrupt_info *inti_mem;
+
+ inti_mem = kzalloc_obj(*inti_mem, GFP_KERNEL_ACCOUNT);
+ if (!inti_mem)
+ return -ENOMEM;
if (start_token) {
irq.u.ext.ext_params2 = token;
@@ -4402,8 +4415,9 @@ static void __kvm_inject_pfault_token(struct kvm_vcpu *vcpu, bool start_token,
} else {
inti.type = KVM_S390_INT_PFAULT_DONE;
inti.parm64 = token;
- WARN_ON_ONCE(kvm_s390_inject_vm(vcpu->kvm, &inti));
+ WARN_ON_ONCE(kvm_s390_inject_vm(vcpu->kvm, &inti, inti_mem));
}
+ return true;
}
bool kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index bf1d7798c1af..2f2da868a040 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -373,7 +373,8 @@ int __must_check kvm_s390_deliver_pending_interrupts(struct kvm_vcpu *vcpu);
void kvm_s390_clear_local_irqs(struct kvm_vcpu *vcpu);
void kvm_s390_clear_float_irqs(struct kvm *kvm);
int __must_check kvm_s390_inject_vm(struct kvm *kvm,
- struct kvm_s390_interrupt *s390int);
+ struct kvm_s390_interrupt *s390int,
+ struct kvm_s390_interrupt_info *inti);
int __must_check kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
struct kvm_s390_irq *irq);
static inline int kvm_s390_inject_prog_irq(struct kvm_vcpu *vcpu,
--
2.52.0
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest
2026-03-26 1:42 ` [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest Douglas Freimuth
@ 2026-03-26 11:46 ` Christian Borntraeger
2026-03-26 15:56 ` Janosch Frank
1 sibling, 0 replies; 6+ messages in thread
From: Christian Borntraeger @ 2026-03-26 11:46 UTC (permalink / raw)
To: Douglas Freimuth, imbrenda, frankja, david, hca, gor, agordeev,
svens, kvm, linux-s390, linux-kernel
Cc: mjrosato
On 26.03.26 at 02:42, Douglas Freimuth wrote:
> +static int kvm_s390_adapter_map(struct kvm *kvm, unsigned int id, __u64 addr)
> +{
> + struct s390_io_adapter *adapter = get_io_adapter(kvm, id);
> + struct s390_map_info *map;
> + unsigned long flags;
> + int ret;
> +
> + if (!adapter || !addr)
> + return -EINVAL;
> +
> + map = kzalloc_obj(*map, GFP_KERNEL);
GFP_KERNEL_ACCOUNT certainly makes sense here. Depending on other feedback,
this can be added when applying.
* Re: [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest
2026-03-26 1:42 ` [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest Douglas Freimuth
2026-03-26 11:46 ` Christian Borntraeger
@ 2026-03-26 15:56 ` Janosch Frank
1 sibling, 0 replies; 6+ messages in thread
From: Janosch Frank @ 2026-03-26 15:56 UTC (permalink / raw)
To: Douglas Freimuth, borntraeger, imbrenda, david, hca, gor,
agordeev, svens, kvm, linux-s390, linux-kernel
Cc: mjrosato
On 3/26/26 02:42, Douglas Freimuth wrote:
> S390 needs map/unmap ioctls, which map the adapter set
> indicator pages, so the pages can be accessed when interrupts are
> disabled. The mappings are cleaned up when the guest is removed.
>
> The map/unmap ioctls are fenced in Secure Execution environments in
> order to avoid long-term pinning there. In Secure Execution environments
> the execution path that existed before this patch is followed instead.
>
> Statistical counters for the map/unmap functions on adapter indicator
> pages are added. The counters can be used to analyze map/unmap activity
> in non-Secure Execution environments; in Secure Execution environments
> they are not incremented, since the adapter indicator pages are not
> mapped there.
>
> Signed-off-by: Douglas Freimuth <freimuth@linux.ibm.com>
> ---
Looks good, two nits below.
[...]
> if (ret > 0)
> ret = 0;
> break;
> - /*
> - * The following operations are no longer needed and therefore no-ops.
> - * The gpa to hva translation is done when an IRQ route is set up. The
> - * set_irq code uses get_user_pages_remote() to do the actual write.
> - */
> case KVM_S390_IO_ADAPTER_MAP:
> case KVM_S390_IO_ADAPTER_UNMAP:
> - ret = 0;
> + /* If in Secure Execution mode do not long term pin. */
> + mutex_lock(&dev->kvm->lock);
> + if (kvm_s390_pv_is_protected(dev->kvm)) {
> + mutex_unlock(&dev->kvm->lock);
> + return 0;
> + }
> + mutex_unlock(&dev->kvm->lock);
> + idx = srcu_read_lock(&dev->kvm->srcu);
> + host_addr = gpa_to_hva(dev->kvm, req.addr);
> + if (kvm_is_error_hva(host_addr)) {
> + srcu_read_unlock(&dev->kvm->srcu, idx);
> + return -EFAULT;
> + }
Alignment issue
[...]
> static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
> {
> const bool need_lock = (cmd->cmd != KVM_PV_ASYNC_CLEANUP_PERFORM);
> @@ -2503,6 +2522,7 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>
> switch (cmd->cmd) {
> case KVM_PV_ENABLE: {
> + kvm_s390_unmap_all_adapters_pv(kvm);
Shouldn't this be located after the check that's below?
> r = -EINVAL;
> if (kvm_s390_pv_is_protected(kvm))
> break;
Thread overview: 6+ messages
2026-03-26 1:42 [PATCH v2 0/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic Fast Inject Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 1/3] KVM: s390: Add map/unmap ioctl and clean mappings post-guest Douglas Freimuth
2026-03-26 11:46 ` Christian Borntraeger
2026-03-26 15:56 ` Janosch Frank
2026-03-26 1:42 ` [PATCH v2 2/3] KVM: s390: Enable adapter_indicators_set to use mapped pages Douglas Freimuth
2026-03-26 1:42 ` [PATCH v2 3/3] KVM: s390: Introducing kvm_arch_set_irq_inatomic fast inject Douglas Freimuth