kvm.vger.kernel.org archive mirror
* [Bug 217379] New: Latency issues in irq_bypass_register_consumer
@ 2023-04-28  7:27 bugzilla-daemon
  2023-05-01 16:51 ` Sean Christopherson
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: bugzilla-daemon @ 2023-04-28  7:27 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=217379

            Bug ID: 217379
           Summary: Latency issues in irq_bypass_register_consumer
           Product: Virtualization
           Version: unspecified
          Hardware: Intel
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P3
         Component: kvm
          Assignee: virtualization_kvm@kernel-bugs.osdl.org
          Reporter: zhuangel570@gmail.com
        Regression: No

We found some latency issues in high-density, high-concurrency scenarios. We
are using Cloud Hypervisor as the VMM for lightweight VMs, with VIRTIO net and
block devices for the VMs. In our tests, creating a VM and registering its
irqfds took about 50ms to 100ms+. After tracing with funclatency (a tool from
bcc-tools, https://github.com/iovisor/bcc), we found the latency is introduced
by the following function:

- irq_bypass_register_consumer introduces more than 60ms per VM.
  This function is called when registering an irqfd: it registers the irqfd as
  a consumer with irqbypass and waits to be connected by irqbypass producers
  such as VFIO or vDPA. In our test, a single irqfd registration takes about
  4ms, so 5 devices with 16 irqfds in total introduce more than 60ms of
  latency.

Here is a simple test case that emulates the latency issue (the real latency
is larger). The case creates 800 VMs in the background that do nothing, then
repeatedly creates 20 VMs and destroys them after 400ms. Every VM does a
simple thing: create the in-kernel irqchip and register 15 irqfds (emulating 5
devices with 3 irqfds each). Just trace the "irq_bypass_register_consumer"
latency and you will reproduce this kind of latency issue. Here is a trace log
from a Xeon(R) Platinum 8255C server (96 cores, 2 sockets) running Linux
6.2.20.

Reproduce Case
https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/kvm_irqfd_fork.c
Reproduce log
https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/test.log
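
For reference, the per-VM setup the reproducer performs is roughly the
following (a simplified sketch, not the actual kvm_irqfd_fork.c; error
handling, vCPU setup and the fork/teardown loop are omitted, and the GSI
numbers are arbitrary):

/* Every KVM_IRQFD below ends up in irq_bypass_register_consumer(),
 * which is the function traced with funclatency. */
#include <fcntl.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void setup_one_vm(void)
{
        int kvm_fd = open("/dev/kvm", O_RDWR);
        int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

        /* In-kernel irqchip, so KVM_IRQFD can be used at all. */
        ioctl(vm_fd, KVM_CREATE_IRQCHIP, 0);

        /* Emulate 5 devices with 3 irqfds each. */
        for (int gsi = 0; gsi < 15; gsi++) {
                struct kvm_irqfd irqfd = { 0 };

                irqfd.fd = eventfd(0, 0);
                irqfd.gsi = gsi;
                ioctl(vm_fd, KVM_IRQFD, &irqfd);
        }
}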

I don't have a graceful method to fix these latencies; my simple idea is to
give the user a chance to avoid them, e.g. a new flag to disable irqbypass for
each irqfd.

Any suggestion to fix the issue is welcome.

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Bug 217379] New: Latency issues in irq_bypass_register_consumer
  2023-04-28  7:27 [Bug 217379] New: Latency issues in irq_bypass_register_consumer bugzilla-daemon
@ 2023-05-01 16:51 ` Sean Christopherson
  2023-07-17 11:58   ` Like Xu
  2023-05-01 16:51 ` [Bug 217379] " bugzilla-daemon
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2023-05-01 16:51 UTC (permalink / raw)
  To: bugzilla-daemon; +Cc: kvm

On Fri, Apr 28, 2023, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=217379
> 
>             Bug ID: 217379
>            Summary: Latency issues in irq_bypass_register_consumer
>            Product: Virtualization
>            Version: unspecified
>           Hardware: Intel
>                 OS: Linux
>             Status: NEW
>           Severity: normal
>           Priority: P3
>          Component: kvm
>           Assignee: virtualization_kvm@kernel-bugs.osdl.org
>           Reporter: zhuangel570@gmail.com
>         Regression: No
> 
> We found some latency issues in high-density, high-concurrency scenarios. We
> are using Cloud Hypervisor as the VMM for lightweight VMs, with VIRTIO net and
> block devices for the VMs. In our tests, creating a VM and registering its
> irqfds took about 50ms to 100ms+. After tracing with funclatency (a tool from
> bcc-tools, https://github.com/iovisor/bcc), we found the latency is introduced
> by the following function:
> 
> - irq_bypass_register_consumer introduces more than 60ms per VM.
>   This function is called when registering an irqfd: it registers the irqfd as
>   a consumer with irqbypass and waits to be connected by irqbypass producers
>   such as VFIO or vDPA. In our test, a single irqfd registration takes about
>   4ms, so 5 devices with 16 irqfds in total introduce more than 60ms of
>   latency.
> 
> Here is a simple test case that emulates the latency issue (the real latency
> is larger). The case creates 800 VMs in the background that do nothing, then
> repeatedly creates 20 VMs and destroys them after 400ms. Every VM does a
> simple thing: create the in-kernel irqchip and register 15 irqfds (emulating 5
> devices with 3 irqfds each). Just trace the "irq_bypass_register_consumer"
> latency and you will reproduce this kind of latency issue. Here is a trace log
> from a Xeon(R) Platinum 8255C server (96 cores, 2 sockets) running Linux
> 6.2.20.
> 
> Reproduce Case
> https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/kvm_irqfd_fork.c
> Reproduce log
> https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/test.log
> 
> I don't have a graceful method to fix these latencies; my simple idea is to
> give the user a chance to avoid them, e.g. a new flag to disable irqbypass for
> each irqfd.
> 
> Any suggestion to fix the issue is welcome.

Looking at the code, it's not surprising that irq_bypass_register_consumer() can
exhibit high latencies.  The producers and consumers are stored in simple linked
lists, and a single mutex is held while traversing the lists *and* connecting
a consumer to a producer (and vice versa).

There are two obvious optimizations that can be done to reduce latency in
irq_bypass_register_consumer():

   - Use a different data type to track the producers and consumers so that lookups
     don't require a linear walk.  AIUI, the "tokens" used to match producers and
     consumers are just kernel pointers, so I _think_ XArray would perform reasonably
     well.

   - Connect producers and consumers outside of a global mutex.

Unfortunately, because .add_producer() and .add_consumer() can fail, and because
connections can be established by adding a consumer _or_ a producer, getting the
locking right without a global mutex is quite difficult.  It's certainly doable
to move the (dis)connect logic out of a global lock, but it's going to require a
dedicated effort, i.e. not something that can be sketched out in a few minutes
(I played around with the code for the better part of an hour trying to do just
that and kept running into edge case race conditions).

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [Bug 217379] Latency issues in irq_bypass_register_consumer
  2023-04-28  7:27 [Bug 217379] New: Latency issues in irq_bypass_register_consumer bugzilla-daemon
  2023-05-01 16:51 ` Sean Christopherson
@ 2023-05-01 16:51 ` bugzilla-daemon
  2023-05-11  9:59 ` bugzilla-daemon
  2025-04-04 21:26 ` bugzilla-daemon
  3 siblings, 0 replies; 9+ messages in thread
From: bugzilla-daemon @ 2023-05-01 16:51 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=217379

--- Comment #1 from Sean Christopherson (seanjc@google.com) ---
On Fri, Apr 28, 2023, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=217379
> 
>             Bug ID: 217379
>            Summary: Latency issues in irq_bypass_register_consumer
>            Product: Virtualization
>            Version: unspecified
>           Hardware: Intel
>                 OS: Linux
>             Status: NEW
>           Severity: normal
>           Priority: P3
>          Component: kvm
>           Assignee: virtualization_kvm@kernel-bugs.osdl.org
>           Reporter: zhuangel570@gmail.com
>         Regression: No
> 
> We found some latency issues in high-density, high-concurrency scenarios. We
> are using Cloud Hypervisor as the VMM for lightweight VMs, with VIRTIO net and
> block devices for the VMs. In our tests, creating a VM and registering its
> irqfds took about 50ms to 100ms+. After tracing with funclatency (a tool from
> bcc-tools, https://github.com/iovisor/bcc), we found the latency is introduced
> by the following function:
> 
> - irq_bypass_register_consumer introduces more than 60ms per VM.
>   This function is called when registering an irqfd: it registers the irqfd as
>   a consumer with irqbypass and waits to be connected by irqbypass producers
>   such as VFIO or vDPA. In our test, a single irqfd registration takes about
>   4ms, so 5 devices with 16 irqfds in total introduce more than 60ms of
>   latency.
> 
> Here is a simple test case that emulates the latency issue (the real latency
> is larger). The case creates 800 VMs in the background that do nothing, then
> repeatedly creates 20 VMs and destroys them after 400ms. Every VM does a
> simple thing: create the in-kernel irqchip and register 15 irqfds (emulating 5
> devices with 3 irqfds each). Just trace the "irq_bypass_register_consumer"
> latency and you will reproduce this kind of latency issue. Here is a trace log
> from a Xeon(R) Platinum 8255C server (96 cores, 2 sockets) running Linux
> 6.2.20.
> 
> Reproduce Case
> https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/kvm_irqfd_fork.c
> Reproduce log
> https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/test.log
> 
> I don't have a graceful method to fix these latencies; my simple idea is to
> give the user a chance to avoid them, e.g. a new flag to disable irqbypass for
> each irqfd.
> 
> Any suggestion to fix the issue is welcome.

Looking at the code, it's not surprising that irq_bypass_register_consumer() can
exhibit high latencies.  The producers and consumers are stored in simple linked
lists, and a single mutex is held while traversing the lists *and* connecting
a consumer to a producer (and vice versa).

There are two obvious optimizations that can be done to reduce latency in
irq_bypass_register_consumer():

   - Use a different data type to track the producers and consumers so that lookups
     don't require a linear walk.  AIUI, the "tokens" used to match producers and
     consumers are just kernel pointers, so I _think_ XArray would perform reasonably
     well.

   - Connect producers and consumers outside of a global mutex.

Unfortunately, because .add_producer() and .add_consumer() can fail, and because
connections can be established by adding a consumer _or_ a producer, getting the
locking right without a global mutex is quite difficult.  It's certainly doable
to move the (dis)connect logic out of a global lock, but it's going to require a
dedicated effort, i.e. not something that can be sketched out in a few minutes
(I played around with the code for the better part of an hour trying to do just
that and kept running into edge case race conditions).

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [Bug 217379] Latency issues in irq_bypass_register_consumer
  2023-04-28  7:27 [Bug 217379] New: Latency issues in irq_bypass_register_consumer bugzilla-daemon
  2023-05-01 16:51 ` Sean Christopherson
  2023-05-01 16:51 ` [Bug 217379] " bugzilla-daemon
@ 2023-05-11  9:59 ` bugzilla-daemon
  2025-04-04 21:26 ` bugzilla-daemon
  3 siblings, 0 replies; 9+ messages in thread
From: bugzilla-daemon @ 2023-05-11  9:59 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=217379

--- Comment #2 from zhuangel (zhuangel570@gmail.com) ---
Thanks for the suggestion, Sean.

Before we have a complete optimization of irq_bypass, do you think this kind
of fix is acceptable? It would give the VMM the ability to turn off the
irq_bypass feature for devices that do not need it:


diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 737318b1c1d9..a7a018ce954a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1292,7 +1292,7 @@ struct kvm_xen_hvm_config {
 };
 #endif

-#define KVM_IRQFD_FLAG_DEASSIGN (1 << 0)
+#define KVM_IRQFD_FLAG_DEASSIGN                (1 << 0)
 /*
  * Available with KVM_CAP_IRQFD_RESAMPLE
  *
@@ -1300,7 +1300,8 @@ struct kvm_xen_hvm_config {
  * the irqfd to operate in resampling mode for level triggered interrupt
  * emulation.  See Documentation/virt/kvm/api.rst.
  */
-#define KVM_IRQFD_FLAG_RESAMPLE (1 << 1)
+#define KVM_IRQFD_FLAG_RESAMPLE                (1 << 1)
+#define KVM_IRQFD_FLAG_NO_BYPASS       (1 << 2)

 struct kvm_irqfd {
        __u32 fd;
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index b0af834ffa95..90fa203d7ef3 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -425,7 +425,7 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
                schedule_work(&irqfd->inject);

 #ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
-       if (kvm_arch_has_irq_bypass()) {
+       if (!(args->flags & KVM_IRQFD_FLAG_NO_BYPASS) && kvm_arch_has_irq_bypass()) {
                irqfd->consumer.token = (void *)irqfd->eventfd;
                irqfd->consumer.add_producer = kvm_arch_irq_bypass_add_producer;
                irqfd->consumer.del_producer = kvm_arch_irq_bypass_del_producer;
@@ -587,7 +587,8 @@ kvm_irqfd_deassign(struct kvm *kvm, struct kvm_irqfd *args)
 int
 kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args)
 {
-       if (args->flags & ~(KVM_IRQFD_FLAG_DEASSIGN | KVM_IRQFD_FLAG_RESAMPLE))
+       if (args->flags & ~(KVM_IRQFD_FLAG_DEASSIGN | KVM_IRQFD_FLAG_RESAMPLE
+                               | KVM_IRQFD_FLAG_NO_BYPASS))
                return -EINVAL;

        if (args->flags & KVM_IRQFD_FLAG_DEASSIGN)
--
2.31.1
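
If something like this is acceptable, a VMM that knows a device will never
have an irqbypass producer could opt out per irqfd along these lines (usage
sketch only; the flag is only proposed above, so it is defined locally):

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_IRQFD_FLAG_NO_BYPASS
#define KVM_IRQFD_FLAG_NO_BYPASS (1 << 2)   /* matches the patch above */
#endif

static int assign_irqfd_no_bypass(int vm_fd, unsigned int gsi)
{
        int efd = eventfd(0, 0);
        struct kvm_irqfd irqfd = { 0 };

        if (efd < 0)
                return -1;

        irqfd.fd = efd;
        irqfd.gsi = gsi;
        /* Skip irq_bypass_register_consumer() for this irqfd entirely. */
        irqfd.flags = KVM_IRQFD_FLAG_NO_BYPASS;

        return ioctl(vm_fd, KVM_IRQFD, &irqfd);
}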

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [Bug 217379] New: Latency issues in irq_bypass_register_consumer
  2023-05-01 16:51 ` Sean Christopherson
@ 2023-07-17 11:58   ` Like Xu
  2023-07-17 15:25     ` Paolo Bonzini
  0 siblings, 1 reply; 9+ messages in thread
From: Like Xu @ 2023-07-17 11:58 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm,
	Paolo Bonzini - Distinguished Engineer (kernel-recipes.org) (KVM HoF),
	Alex Williamson, Red Hat

On 2/5/2023 12:51 am, Sean Christopherson wrote:
> On Fri, Apr 28, 2023, bugzilla-daemon@kernel.org wrote:
>> https://bugzilla.kernel.org/show_bug.cgi?id=217379
>>
>>              Bug ID: 217379
>>             Summary: Latency issues in irq_bypass_register_consumer
>>             Product: Virtualization
>>             Version: unspecified
>>            Hardware: Intel
>>                  OS: Linux
>>              Status: NEW
>>            Severity: normal
>>            Priority: P3
>>           Component: kvm
>>            Assignee: virtualization_kvm@kernel-bugs.osdl.org
>>            Reporter: zhuangel570@gmail.com
>>          Regression: No
>>
>> We found some latency issues in high-density, high-concurrency scenarios. We
>> are using Cloud Hypervisor as the VMM for lightweight VMs, with VIRTIO net
>> and block devices for the VMs. In our tests, creating a VM and registering
>> its irqfds took about 50ms to 100ms+. After tracing with funclatency (a tool
>> from bcc-tools, https://github.com/iovisor/bcc), we found the latency is
>> introduced by the following function:
>>
>> - irq_bypass_register_consumer introduces more than 60ms per VM.
>>    This function is called when registering an irqfd: it registers the irqfd
>>    as a consumer with irqbypass and waits to be connected by irqbypass
>>    producers such as VFIO or vDPA. In our test, a single irqfd registration
>>    takes about 4ms, so 5 devices with 16 irqfds in total introduce more than
>>    60ms of latency.
>>
>> Here is a simple test case that emulates the latency issue (the real latency
>> is larger). The case creates 800 VMs in the background that do nothing, then
>> repeatedly creates 20 VMs and destroys them after 400ms. Every VM does a
>> simple thing: create the in-kernel irqchip and register 15 irqfds (emulating
>> 5 devices with 3 irqfds each). Just trace the "irq_bypass_register_consumer"
>> latency and you will reproduce this kind of latency issue. Here is a trace
>> log from a Xeon(R) Platinum 8255C server (96 cores, 2 sockets) running Linux
>> 6.2.20.
>>
>> Reproduce Case
>> https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/kvm_irqfd_fork.c
>> Reproduce log
>> https://github.com/zhuangel/misc/blob/main/test/kvm_irqfd_fork/test.log
>>
>> I don't have a graceful method to fix these latencies; my simple idea is to
>> give the user a chance to avoid them, e.g. a new flag to disable irqbypass
>> for each irqfd.
>>
>> Any suggestion to fix the issue is welcome.
> 
> Looking at the code, it's not surprising that irq_bypass_register_consumer() can
> exhibit high latencies.  The producers and consumers are stored in simple linked
> lists, and a single mutex is held while traversing the lists *and* connecting
> a consumer to a producer (and vice versa).
> 
> There are two obvious optimizations that can be done to reduce latency in
> irq_bypass_register_consumer():
> 
>     - Use a different data type to track the producers and consumers so that lookups
>       don't require a linear walk.  AIUI, the "tokens" used to match producers and
>       consumers are just kernel pointers, so I _think_ XArray would perform reasonably
>       well.
My measurements show that there is little performance gain from optimizing lookups.

> 
>     - Connect producers and consumers outside of a global mutex.

In usage scenarios where a large number of VMs are created, it is very painful
to have contention on this global mutex, especially when users on different
NUMA nodes are concurrently walking this critical path.

Wait time to acquire this lock (on 2.70GHz ICX):
- avg = 117.855314 ms
- min = 20 ns
- max = 11428.340858 ms

Before we optimize this path with a rewrite, could we first adopt a
conservative approach:

- introduce the KVM_IRQFD_FLAG_NO_BYPASS [*], or
- introduce module_param_cb(kvm_irq_bypass ...) (644abbb254b1), or
- introduce extra Kconfig knob for "select IRQ_BYPASS_MANAGER";

[*] 
https://lore.kernel.org/kvm/bug-217379-28872-KU8tTDkhtT@https.bugzilla.kernel.org%2F/

Any better move?

> 
> Unfortunately, because .add_producer() and .add_consumer() can fail, and because
> connections can be established by adding a consumer _or_ a producer, getting the
> locking right without a global mutex is quite difficult.  It's certainly doable
> to move the (dis)connect logic out of a global lock, but it's going to require a
> dedicated effort, i.e. not something that can be sketched out in a few minutes
> (I played around with the code for the better part of an hour trying to do just
> that and kept running into edge case race conditions).
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Bug 217379] New: Latency issues in irq_bypass_register_consumer
  2023-07-17 11:58   ` Like Xu
@ 2023-07-17 15:25     ` Paolo Bonzini
  2023-07-18  3:43       ` Like Xu
  0 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2023-07-17 15:25 UTC (permalink / raw)
  To: Like Xu; +Cc: Sean Christopherson, kvm, Alex Williamson, Red Hat

On Mon, Jul 17, 2023 at 1:58 PM Like Xu <like.xu.linux@gmail.com> wrote:
> >     - Use a different data type to track the producers and consumers so that lookups
> >       don't require a linear walk.  AIUI, the "tokens" used to match producers and
> >       consumers are just kernel pointers, so I _think_ XArray would perform reasonably
> >       well.
>
> My measurements show that there is little performance gain from optimizing lookups.

How did you test this?

Paolo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Bug 217379] New: Latency issues in irq_bypass_register_consumer
  2023-07-17 15:25     ` Paolo Bonzini
@ 2023-07-18  3:43       ` Like Xu
  2023-07-18  9:51         ` Paolo Bonzini
  0 siblings, 1 reply; 9+ messages in thread
From: Like Xu @ 2023-07-18  3:43 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Sean Christopherson, kvm, Alex Williamson, Red Hat

On 17/7/2023 11:25 pm, Paolo Bonzini wrote:
> On Mon, Jul 17, 2023 at 1:58 PM Like Xu <like.xu.linux@gmail.com> wrote:
>>>      - Use a different data type to track the producers and consumers so that lookups
>>>        don't require a linear walk.  AIUI, the "tokens" used to match producers and
>>>        consumers are just kernel pointers, so I _think_ XArray would perform reasonably
>>>        well.
>>
>> My measurements show that there is little performance gain from optimizing lookups.
> 
> How did you test this?
> 
> Paolo
> 

First of all, I agree that the use of linear lookups here is certainly not
optimal; the point, though, is that it is not the culprit for the long delay
of irq_bypass_register_consumer().

Based on the user-supplied kvm_irqfd_fork load, this is a test scenario where
there are no producers and the number of consumers grows linearly. The time
delay [*] for the two list_for_each_entry() walks (without the XArray
proposal) is:

- avg = 444773 ns
- min = 44 ns
- max = 1865008 ns

[*] calculate sched_clock() delta on 2.70GHz ICX

Compare this with the wait time on mutex_lock(&lock):

- avg = 117.855314 ms
- min = 20 ns
- max = 11428.340858 ms

It's fair to say that optimizing the lock bottleneck has greater
performance gain, right?

Please let me know what ideas you have to move this forward.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Bug 217379] New: Latency issues in irq_bypass_register_consumer
  2023-07-18  3:43       ` Like Xu
@ 2023-07-18  9:51         ` Paolo Bonzini
  0 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2023-07-18  9:51 UTC (permalink / raw)
  To: Like Xu; +Cc: Sean Christopherson, kvm, Alex Williamson, Red Hat

On Tue, Jul 18, 2023 at 5:43 AM Like Xu <like.xu.linux@gmail.com> wrote:
>
> On 17/7/2023 11:25 pm, Paolo Bonzini wrote:
> > On Mon, Jul 17, 2023 at 1:58 PM Like Xu <like.xu.linux@gmail.com> wrote:
> >>>      - Use a different data type to track the producers and consumers so that lookups
> >>>        don't require a linear walk.  AIUI, the "tokens" used to match producers and
> >>>        consumers are just kernel pointers, so I _think_ XArray would perform reasonably
> >>>        well.
> >>
> >> My measurements show that there is little performance gain from optimizing lookups.
>
> First of all, I agree that the use of linear lookups here is certainly not
> optimal, and meanwhile the point is that it's not the culprit for the long
> delay of irq_bypass_register_consumer().
>
> Based on the user-supplied kvm_irqfd_fork load, we note that this is a test
> scenario where there are no producers and the number of consumer is growing
> linearly, and we note that the time delay [*] for two list_for_each_entry()
> walks (w/o xArray proposal) is:

This scenario is still subject to quadratic complexity in the first
foreach loop:

        list_for_each_entry(tmp, &consumers, node) {
                if (tmp->token == consumer->token || tmp == consumer) {
                        ret = -EBUSY;
                        goto out_err;
                }
        }
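
For scale: with the reproducer's ~800 background VMs * 15 irqfds, roughly
12,000 consumers are already on the list, and the k-th registration walks the
previous k-1 entries, so populating the list alone costs on the order of
12,000^2 / 2 ~= 7*10^7 token comparisons, all serialized under the one mutex.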

Paolo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [Bug 217379] Latency issues in irq_bypass_register_consumer
  2023-04-28  7:27 [Bug 217379] New: Latency issues in irq_bypass_register_consumer bugzilla-daemon
                   ` (2 preceding siblings ...)
  2023-05-11  9:59 ` bugzilla-daemon
@ 2025-04-04 21:26 ` bugzilla-daemon
  3 siblings, 0 replies; 9+ messages in thread
From: bugzilla-daemon @ 2025-04-04 21:26 UTC (permalink / raw)
  To: kvm

https://bugzilla.kernel.org/show_bug.cgi?id=217379

Sean Christopherson (seanjc@google.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |seanjc@google.com

--- Comment #3 from Sean Christopherson (seanjc@google.com) ---
Huh.  Rereading this almost two years later, I realized I completely missed
that (a) your setup wasn't actually using irqbypass and (b) you suggested
giving userspace a way to disable irqbypass in KVM when it couldn't possibly be
utilized.

I've proposed adding a module param for other reasons[1].  If that doesn't
suffice, KVM could also provide KVM_IRQFD_FLAG_NO_IRQBYPASS (which would also
be useful for other reasons).

As for the ugly latency problems, I posted a new patch to use xarray to
mitigate the issue[2].

[1] https://lore.kernel.org/all/20250401161804.842968-1-seanjc@google.com
[2] https://lore.kernel.org/all/20250404211449.1443336-1-seanjc@google.com

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2025-04-04 21:26 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-28  7:27 [Bug 217379] New: Latency issues in irq_bypass_register_consumer bugzilla-daemon
2023-05-01 16:51 ` Sean Christopherson
2023-07-17 11:58   ` Like Xu
2023-07-17 15:25     ` Paolo Bonzini
2023-07-18  3:43       ` Like Xu
2023-07-18  9:51         ` Paolo Bonzini
2023-05-01 16:51 ` [Bug 217379] " bugzilla-daemon
2023-05-11  9:59 ` bugzilla-daemon
2025-04-04 21:26 ` bugzilla-daemon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).