* Re: [PATCH] x86/irq: Plug vector setup race
       [not found] <draft-87ikjhrhhh.ffs@tglx>
@ 2025-07-31 12:45 ` Thomas Gleixner
  2025-08-01 14:56   ` Hogan Wang
  0 siblings, 1 reply; 3+ messages in thread

From: Thomas Gleixner @ 2025-07-31 12:45 UTC
To: Hogan Wang, x86, dave.hansen, kvm, alex.williamson
Cc: weidong.huang, yechuan, hogan.wang, wangxinxin.wang, jianjay.zhou,
    wangjie88, Marc Zyngier, LKML

On Thu, Jul 24 2025 at 12:49, Thomas Gleixner wrote:

Hogan!

> Hogan reported a vector setup race which overwrites the interrupt
> descriptor in the per CPU vector array, resulting in a dysfunctional
> device.
>
>   CPU0                                CPU1
>                                       interrupt is raised in APIC IRR
>                                       but not handled
>   free_irq()
>     per_cpu(vector_irq, CPU1)[vector] = VECTOR_SHUTDOWN;
>
>   request_irq()                       common_interrupt()
>                                         d = this_cpu_read(vector_irq[vector]);
>
>     per_cpu(vector_irq, CPU1)[vector] = desc;
>
>                                         if (d == VECTOR_SHUTDOWN)
>                                           this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
>
> free_irq() cannot observe the pending vector in the CPU1 APIC as there is
> no way to query a remote CPU's APIC IRR.
>
> This requires that request_irq() uses the same vector/CPU combination as
> the one which was freed, but it can also be triggered by a spurious
> interrupt.
>
> Prevent this by reevaluating vector_irq under the vector lock, which is
> held by the interrupt activation code when vector_irq is updated.

Does this fix your problem?

Thanks,

        tglx
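For illustration, the reevaluation described in the changelog could look
roughly like the sketch below. The helper name and its call site are
assumptions, not the actual patch; lock_vector_lock(), the vector_irq
array and the VECTOR_* states are the real x86 primitives:

    #include <linux/irq.h>
    #include <asm/hw_irq.h>

    /*
     * Sketch: called when vector_irq[vector] does not hold a valid
     * descriptor. x86_vector_activate() installs descriptors while
     * holding the vector lock, so rereading under the same lock
     * ensures that a freshly installed descriptor can no longer be
     * overwritten with VECTOR_UNUSED.
     */
    static struct irq_desc *revalidate_vector(unsigned int vector)
    {
            struct irq_desc *desc;

            lock_vector_lock();
            desc = __this_cpu_read(vector_irq[vector]);
            if (IS_ERR_OR_NULL(desc)) {
                    /* Still stale: safe to mark the vector as unused */
                    __this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
                    desc = NULL;
            }
            unlock_vector_lock();
            /* A valid descriptor is handed back to be handled normally */
            return desc;
    }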
* Re: [PATCH] x86/irq: Plug vector setup race
  2025-07-31 12:45 ` [PATCH] x86/irq: Plug vector setup race Thomas Gleixner
@ 2025-08-01 14:56   ` Hogan Wang
  2025-08-02 11:54     ` Thomas Gleixner
  0 siblings, 1 reply; 3+ messages in thread

From: Hogan Wang @ 2025-08-01 14:56 UTC
To: tglx, x86, dave.hansen, kvm, alex.williamson
Cc: weidong.huang, yechuan, hogan.wang, wangxinxin.wang, jianjay.zhou,
    wangjie88, maz, linux-kernel

> On Thu, Jul 24 2025 at 12:49, Thomas Gleixner wrote:

Thank you very much for your professional, friendly, and detailed
response. Based on the clear modification suggestions you provided, I
retested and validated as follows:

1) Start a virtual machine with 192 vCPUs and 384G of memory, and
   configure one VirtioNet network card and one VirtioSCSI disk.

2) After the virtual machine has started successfully, execute the
   following script inside it. Interrupt number 30 is the VirtioNet
   MSI-X interrupt.

   for((;;))
   do
       (for((i=0;i<192;i++))
       do
           echo $i > /proc/irq/30/smp_affinity_list
           sleep 0.1
       done)
   done

After a 7x24-hour test, no "No irq handler for vector" error logs were
found, so I believe this issue has been resolved.

As you said, this fix cannot solve the problem of lost interrupts. I
believe an effective solution to the lost interrupt issue might be to
modify the VFIO module to avoid the unplug/plug of the irq and instead
use a more lightweight method to switch interrupt modes, like:

    vfio_irq_handler()
        if kvm_mode
            vfio_send_eventfd(kvm_irq_fd);
        else
            vfio_send_eventfd(qemu_irq_fd);

However, this brings some troubles:

1) The kvm_mode variable has to be protected, which costs performance.
2) The VFIO interface has to be handed two eventfds.
3) Another interface is needed to implement the mode switch.

Do you have a better solution to fix this interrupt loss issue?

There is a question that has been troubling me: why are interrupts still
reported after they have been masked and the interrupt remapping table
entries have been disabled? Is the interrupt cached somewhere?

> Hogan!
>
> > Hogan reported a vector setup race which overwrites the interrupt
> > descriptor in the per CPU vector array, resulting in a dysfunctional
> > device.
> >
> >   CPU0                                CPU1
> >                                       interrupt is raised in APIC IRR
> >                                       but not handled
> >   free_irq()
> >     per_cpu(vector_irq, CPU1)[vector] = VECTOR_SHUTDOWN;
> >
> >   request_irq()                       common_interrupt()
> >                                         d = this_cpu_read(vector_irq[vector]);
> >
> >     per_cpu(vector_irq, CPU1)[vector] = desc;
> >
> >                                         if (d == VECTOR_SHUTDOWN)
> >                                           this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
> >
> > free_irq() cannot observe the pending vector in the CPU1 APIC as there
> > is no way to query a remote CPU's APIC IRR.
> >
> > This requires that request_irq() uses the same vector/CPU combination
> > as the one which was freed, but it can also be triggered by a spurious
> > interrupt.
> >
> > Prevent this by reevaluating vector_irq under the vector lock, which
> > is held by the interrupt activation code when vector_irq is updated.
>
> Does this fix your problem?

Thanks,
Hogan
--
2.45.1
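For reference, one way to address trouble 1) above without taking a lock
in the interrupt hot path would be to protect the active trigger with
RCU instead of a locked kvm_mode variable. This is only a sketch: the
structure and function names are hypothetical, while the RCU and eventfd
primitives (including the single-argument eventfd_signal() of recent
kernels) are real APIs:

    #include <linux/eventfd.h>
    #include <linux/interrupt.h>
    #include <linux/rcupdate.h>

    /* Hypothetical per interrupt context, not an existing VFIO struct */
    struct vfio_irq_ctx {
            struct eventfd_ctx __rcu *trigger; /* kvm_irq_fd or qemu_irq_fd */
    };

    static irqreturn_t vfio_irq_handler(int irq, void *dev_id)
    {
            struct vfio_irq_ctx *ctx = dev_id;
            struct eventfd_ctx *trigger;

            /* Lockless read of the currently active eventfd */
            rcu_read_lock();
            trigger = rcu_dereference(ctx->trigger);
            if (trigger)
                    eventfd_signal(trigger);
            rcu_read_unlock();
            return IRQ_HANDLED;
    }

    /*
     * Mode switch (process context): publish the new eventfd, then
     * wait until no handler can reference the old one anymore.
     */
    static void vfio_switch_trigger(struct vfio_irq_ctx *ctx,
                                    struct eventfd_ctx *new)
    {
            rcu_assign_pointer(ctx->trigger, new);
            synchronize_rcu();
    }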
* Re: [PATCH] x86/irq: Plug vector setup race
  2025-08-01 14:56 ` Hogan Wang
@ 2025-08-02 11:54   ` Thomas Gleixner
  0 siblings, 0 replies; 3+ messages in thread

From: Thomas Gleixner @ 2025-08-02 11:54 UTC
To: Hogan Wang, x86, dave.hansen, kvm, alex.williamson
Cc: weidong.huang, yechuan, hogan.wang, wangxinxin.wang, jianjay.zhou,
    wangjie88, maz, linux-kernel

On Fri, Aug 01 2025 at 22:56, Hogan Wang wrote:
> I believe an effective solution to the lost interrupt issue might be to
> modify the VFIO module to avoid the unplug/plug of the irq and instead
> use a more lightweight method to switch interrupt modes, like:
>
>     vfio_irq_handler()
>         if kvm_mode
>             vfio_send_eventfd(kvm_irq_fd);
>         else
>             vfio_send_eventfd(qemu_irq_fd);
>
> However, this brings some troubles:
>
> 1) The kvm_mode variable has to be protected, which costs performance.
> 2) The VFIO interface has to be handed two eventfds.
> 3) Another interface is needed to implement the mode switch.
>
> Do you have a better solution to fix this interrupt loss issue?

Interesting. I looked at vfio_irq_handler(), which is in the platform/
part of VFIO. The corresponding vfio_set_trigger(), which switches
eventfds, does the right thing:

    disable_irq();
    update(trigger);
    enable_irq();

disable_irq() ensures that there is no interrupt handler in progress, so
it becomes safe to switch the trigger in the data structure which has
been handed to request_irq() as the @dev_id argument.

For edge type interrupts this ensures that an interrupt which arrives
while the interrupt is disabled is retriggered on enable, so that no
interrupt can get lost.

The PCI variant uses the trigger itself as the @dev_id argument and
therefore has to do the free_irq()/request_irq() dance. It shouldn't be
hard to convert the PCI implementation over to the disable/enable
scheme.

> There is a question that has been troubling me: why are interrupts
> still reported after they have been masked and the interrupt remapping
> table entries have been disabled? Is the interrupt cached somewhere?

Let me bring back the picture I used before:

CPU0                                    CPU1
                                        vmenter(vCPU0)
                                        ....
                                        local_irq_disable()
                                        msi_set_affinity()
                                     #1   mask(MSI-X)
                                        vmexit()
                                        ...
                                     #2 interrupt is raised in APIC
                                        but not handled
#3 really_mask(MSI-X)

free_irq()
     mask();
#4   __synchronize_irq()
     msi_domain_deactivate()
       write_msg(0);
     x86_vector_deactivate()
#5     per_cpu(vector_irq, cpu)[vector] = VECTOR_SHUTDOWN;

                                     #6 local_irq_enable()
                                        interrupt is handled and
                                        observes VECTOR_SHUTDOWN
                                        writes VECTOR_UNUSED

request_irq()
     x86_vector_activate()
       per_cpu(vector_irq, cpu)[vector] = desc;
     msi_domain_activate()
       write_msg(msg);
     unmask();

#1 is the mask operation in the VM, which is trapped, i.e. the interrupt
   is not yet masked at the MSI-X level.

#2 The device raises the interrupt _before_ the host can mask the
   interrupt at the PCI MSI-X level (#3). The interrupt is sent to the
   APIC of the target CPU1, which sets the corresponding IRR bit in the
   APIC if the CPU cannot handle it at that point, because it has
   interrupts disabled.

#4 cannot observe the pending IRR bit on CPU1's APIC and therefore
   concludes that there is no interrupt in flight.

If the host side VMM manages to shut down the interrupt completely (#5)
_before_ CPU1 reenables interrupts (#6), then CPU1 will observe
VECTOR_SHUTDOWN and treat it as a spurious interrupt.

The same problem exists on bare metal, when a driver leaves the device
interrupts enabled and then does a free/request dance:

CPU0                                    CPU1
                                        ....
                                        local_irq_disable()
#1 free_irq()
#2                                      ...
                                        interrupt is raised in APIC
                                        but not handled
#3   really_mask(MSI-X)
#4   __synchronize_irq()
     msi_domain_deactivate()
       write_msg(0);
     x86_vector_deactivate()
#5     per_cpu(vector_irq, cpu)[vector] = VECTOR_SHUTDOWN;

                                     #6 local_irq_enable()
                                        interrupt is handled and
                                        observes VECTOR_SHUTDOWN
                                        writes VECTOR_UNUSED

request_irq()
     x86_vector_activate()
       per_cpu(vector_irq, cpu)[vector] = desc;
     msi_domain_activate()
       write_msg(msg);
     unmask();

See?

Thanks,

        tglx
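A sketch of what the disable/enable based trigger update could look like
for the PCI variant, under the assumption that @dev_id points to a per
vector context instead of the trigger itself. The context structure and
function names are hypothetical; disable_irq(), enable_irq() and the
eventfd helpers are the real APIs:

    #include <linux/err.h>
    #include <linux/eventfd.h>
    #include <linux/interrupt.h>

    /* Hypothetical per vector context handed to request_irq() */
    struct vfio_pci_irq_ctx {
            struct eventfd_ctx *trigger;
    };

    static int vfio_pci_set_trigger(struct vfio_pci_irq_ctx *ctx,
                                    unsigned int irq, int fd)
    {
            struct eventfd_ctx *new = NULL;

            if (fd >= 0) {
                    new = eventfd_ctx_fdget(fd);
                    if (IS_ERR(new))
                            return PTR_ERR(new);
            }

            /* Waits until no handler is in progress anymore */
            disable_irq(irq);
            if (ctx->trigger)
                    eventfd_ctx_put(ctx->trigger);
            ctx->trigger = new;
            /* Edge type: a pending interrupt is retriggered on enable */
            enable_irq(irq);
            return 0;
    }

With this scheme the vector/CPU stays allocated across the mode switch,
so the free_irq()/request_irq() window that can drop an interrupt never
opens in the first place.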